Test Report: Hyper-V_Windows 17977

afe619924c08f9e8f87f8c65127b26c11ec5ac1e:2024-04-28:34242

Failed tests (45/193)

| Order | Failed Test                                                    | Duration (s) |
|-------|----------------------------------------------------------------|--------------|
| 29    | TestAddons/parallel/Registry                                   | 77.5         |
| 48    | TestForceSystemdFlag                                           | 638.99       |
| 55    | TestErrorSpam/setup                                            | 187.49       |
| 79    | TestFunctional/serial/MinikubeKubectlCmdDirectly               | 32.67        |
| 80    | TestFunctional/serial/ExtraConfig                              | 277.7        |
| 81    | TestFunctional/serial/ComponentHealth                          | 181.07       |
| 84    | TestFunctional/serial/InvalidService                           | 4.26         |
| 86    | TestFunctional/parallel/ConfigCmd                              | 1.45         |
| 90    | TestFunctional/parallel/StatusCmd                              | 188.76       |
| 94    | TestFunctional/parallel/ServiceCmdConnect                      | 300.98       |
| 96    | TestFunctional/parallel/PersistentVolumeClaim                  | 492.09       |
| 100   | TestFunctional/parallel/MySQL                                  | 292.02       |
| 106   | TestFunctional/parallel/NodeLabels                             | 154.44       |
| 116   | TestFunctional/parallel/ImageCommands/ImageListShort           | 45.75        |
| 117   | TestFunctional/parallel/ImageCommands/ImageListTable           | 47.73        |
| 118   | TestFunctional/parallel/ImageCommands/ImageListJson            | 60.29        |
| 119   | TestFunctional/parallel/ImageCommands/ImageListYaml            | 60.27        |
| 120   | TestFunctional/parallel/ImageCommands/ImageBuild               | 120.49       |
| 122   | TestFunctional/parallel/ImageCommands/ImageLoadDaemon          | 78.26        |
| 123   | TestFunctional/parallel/ImageCommands/ImageReloadDaemon        | 120.77       |
| 125   | TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel       | 7.74         |
| 128   | TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup     | 4.23         |
| 134   | TestFunctional/parallel/DockerEnv/powershell                   | 451.3        |
| 135   | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon    | 120.43       |
| 136   | TestFunctional/parallel/ServiceCmd/DeployApp                   | 2.18         |
| 137   | TestFunctional/parallel/ServiceCmd/List                        | 6.72         |
| 138   | TestFunctional/parallel/ServiceCmd/JSONOutput                  | 6.72         |
| 139   | TestFunctional/parallel/ServiceCmd/HTTPS                       | 6.75         |
| 140   | TestFunctional/parallel/ServiceCmd/Format                      | 6.77         |
| 141   | TestFunctional/parallel/ServiceCmd/URL                         | 6.7          |
| 142   | TestFunctional/parallel/ImageCommands/ImageSaveToFile          | 60.29        |
| 144   | TestFunctional/parallel/ImageCommands/ImageLoadFromFile        | 0.36         |
| 155   | TestMultiControlPlane/serial/StartCluster                      | 441.77       |
| 156   | TestMultiControlPlane/serial/DeployApp                         | 722.81       |
| 157   | TestMultiControlPlane/serial/PingHostFromPods                  | 44.37        |
| 158   | TestMultiControlPlane/serial/AddWorkerNode                     | 258.72       |
| 160   | TestMultiControlPlane/serial/HAppyAfterClusterStart            | 50.32        |
| 161   | TestMultiControlPlane/serial/CopyFile                          | 66.04        |
| 162   | TestMultiControlPlane/serial/StopSecondaryNode                 | 94.55        |
| 163   | TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop | 43.65        |
| 164   | TestMultiControlPlane/serial/RestartSecondaryNode              | 160.36       |
| 220   | TestMultiNode/serial/PingHostFrom2Pods                         | 55.09        |
| 227   | TestMultiNode/serial/RestartKeepsNodes                         | 439.92       |
| 228   | TestMultiNode/serial/DeleteNode                                | 88.65        |
| 238   | TestRunningBinaryUpgrade                                       | 10800.452    |
TestAddons/parallel/Registry (77.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 22.3703ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-f55wd" [8443b100-9b05-4d9b-a6d1-b051457fd394] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0337372s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xrxjb" [9a50f8e6-ee8b-4f29-bb80-1c32b8a4c42f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.011339s
addons_test.go:340: (dbg) Run:  kubectl --context addons-610300 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-610300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-610300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.4794994s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 ip: (2.3125284s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0428 16:15:59.290282   10064 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-610300 ip"
2024/04/28 16:16:01 [DEBUG] GET http://172.27.234.130:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable registry --alsologtostderr -v=1: (15.5053343s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-610300 -n addons-610300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-610300 -n addons-610300: (12.1668744s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 logs -n 25: (9.9122766s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-808700 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT |                     |
	|         | -p download-only-808700                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT | 28 Apr 24 16:08 PDT |
	| delete  | -p download-only-808700                                                                     | download-only-808700 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT | 28 Apr 24 16:08 PDT |
	| start   | -o=json --download-only                                                                     | download-only-975500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT |                     |
	|         | -p download-only-975500                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT | 28 Apr 24 16:09 PDT |
	| delete  | -p download-only-975500                                                                     | download-only-975500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT | 28 Apr 24 16:09 PDT |
	| delete  | -p download-only-808700                                                                     | download-only-808700 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT | 28 Apr 24 16:09 PDT |
	| delete  | -p download-only-975500                                                                     | download-only-975500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT | 28 Apr 24 16:09 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-654900 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT |                     |
	|         | binary-mirror-654900                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:64339                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-654900                                                                     | binary-mirror-654900 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT | 28 Apr 24 16:09 PDT |
	| addons  | enable dashboard -p                                                                         | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT |                     |
	|         | addons-610300                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT |                     |
	|         | addons-610300                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-610300 --wait=true                                                                | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:09 PDT | 28 Apr 24 16:15 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-610300 addons                                                                        | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:15 PDT | 28 Apr 24 16:15 PDT |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-610300 ssh cat                                                                       | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:15 PDT | 28 Apr 24 16:16 PDT |
	|         | /opt/local-path-provisioner/pvc-449e89c9-f392-43ed-ae7e-bcdaa8a76677_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-610300 ip                                                                            | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:15 PDT | 28 Apr 24 16:16 PDT |
	| addons  | addons-610300 addons disable                                                                | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:16 PDT | 28 Apr 24 16:16 PDT |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-610300 addons disable                                                                | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:16 PDT | 28 Apr 24 16:16 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-610300 addons disable                                                                | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:16 PDT | 28 Apr 24 16:16 PDT |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:16 PDT |                     |
	|         | -p addons-610300                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-610300 addons                                                                        | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:16 PDT |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-610300        | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:16 PDT |                     |
	|         | addons-610300                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:09:20
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:09:20.797433    7436 out.go:291] Setting OutFile to fd 908 ...
	I0428 16:09:20.798288    7436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:09:20.798288    7436 out.go:304] Setting ErrFile to fd 912...
	I0428 16:09:20.798288    7436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:09:20.823313    7436 out.go:298] Setting JSON to false
	I0428 16:09:20.827152    7436 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3203,"bootTime":1714342556,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:09:20.827152    7436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:09:20.834247    7436 out.go:177] * [addons-610300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:09:20.837662    7436 notify.go:220] Checking for updates...
	I0428 16:09:20.841028    7436 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:09:20.843727    7436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:09:20.846291    7436 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:09:20.850381    7436 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:09:20.853080    7436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 16:09:20.858959    7436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 16:09:26.070394    7436 out.go:177] * Using the hyperv driver based on user configuration
	I0428 16:09:26.075006    7436 start.go:297] selected driver: hyperv
	I0428 16:09:26.075006    7436 start.go:901] validating driver "hyperv" against <nil>
	I0428 16:09:26.075006    7436 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 16:09:26.124000    7436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 16:09:26.126151    7436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 16:09:26.126310    7436 cni.go:84] Creating CNI manager for ""
	I0428 16:09:26.126310    7436 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:09:26.126386    7436 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0428 16:09:26.126671    7436 start.go:340] cluster config:
	{Name:addons-610300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-610300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:09:26.127029    7436 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:09:26.131009    7436 out.go:177] * Starting "addons-610300" primary control-plane node in "addons-610300" cluster
	I0428 16:09:26.135802    7436 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:09:26.135802    7436 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 16:09:26.135802    7436 cache.go:56] Caching tarball of preloaded images
	I0428 16:09:26.137783    7436 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 16:09:26.137783    7436 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 16:09:26.138355    7436 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\config.json ...
	I0428 16:09:26.138355    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\config.json: {Name:mkce6b7f549ae3292b5f0a4b5a2cc3cde4771cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:09:26.139131    7436 start.go:360] acquireMachinesLock for addons-610300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 16:09:26.139131    7436 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-610300"
	I0428 16:09:26.139131    7436 start.go:93] Provisioning new machine with config: &{Name:addons-610300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:addons-610300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 16:09:26.139131    7436 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 16:09:26.141657    7436 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0428 16:09:26.145497    7436 start.go:159] libmachine.API.Create for "addons-610300" (driver="hyperv")
	I0428 16:09:26.145497    7436 client.go:168] LocalClient.Create starting
	I0428 16:09:26.146016    7436 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 16:09:26.243944    7436 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 16:09:26.396216    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 16:09:28.572069    7436 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 16:09:28.572069    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:28.579981    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 16:09:30.196415    7436 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 16:09:30.196623    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:30.196701    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 16:09:31.572661    7436 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 16:09:31.583154    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:31.583331    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 16:09:35.102447    7436 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 16:09:35.111030    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:35.114033    7436 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 16:09:35.567336    7436 main.go:141] libmachine: Creating SSH key...
	I0428 16:09:35.914672    7436 main.go:141] libmachine: Creating VM...
	I0428 16:09:35.914672    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 16:09:38.512650    7436 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 16:09:38.512650    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:38.522365    7436 main.go:141] libmachine: Using switch "Default Switch"
	I0428 16:09:38.522514    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 16:09:40.154103    7436 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 16:09:40.154103    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:40.154431    7436 main.go:141] libmachine: Creating VHD
	I0428 16:09:40.154431    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 16:09:43.609704    7436 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CFCFF1E3-0E2E-49DF-A7AC-E7D11C1B0117
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 16:09:43.614393    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:43.614393    7436 main.go:141] libmachine: Writing magic tar header
	I0428 16:09:43.614597    7436 main.go:141] libmachine: Writing SSH key tar header
	I0428 16:09:43.622352    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 16:09:46.657866    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:09:46.663945    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:46.664030    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\disk.vhd' -SizeBytes 20000MB
	I0428 16:09:49.013392    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:09:49.020480    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:49.020559    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-610300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0428 16:09:52.851300    7436 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-610300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 16:09:52.862023    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:52.862023    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-610300 -DynamicMemoryEnabled $false
	I0428 16:09:54.872578    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:09:54.872681    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:54.872755    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-610300 -Count 2
	I0428 16:09:56.834238    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:09:56.845177    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:56.845527    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-610300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\boot2docker.iso'
	I0428 16:09:59.156972    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:09:59.156972    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:09:59.159670    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-610300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\disk.vhd'
	I0428 16:10:01.578117    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:10:01.578117    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:01.578117    7436 main.go:141] libmachine: Starting VM...
	I0428 16:10:01.588950    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-610300
	I0428 16:10:04.625257    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:10:04.625257    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:04.625257    7436 main.go:141] libmachine: Waiting for host to start...
	I0428 16:10:04.625257    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:06.753168    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:06.753168    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:06.753168    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:09.166230    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:10:09.166303    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:10.181187    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:12.254381    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:12.264316    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:12.264402    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:14.713055    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:10:14.713055    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:15.719262    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:17.744347    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:17.744347    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:17.744347    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:20.128606    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:10:20.130990    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:21.146488    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:23.207301    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:23.207301    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:23.207409    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:25.515923    7436 main.go:141] libmachine: [stdout =====>] : 
	I0428 16:10:25.527598    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:26.532266    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:28.606618    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:28.614823    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:28.614890    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:31.081277    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:31.081277    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:31.081380    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:33.053847    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:33.053847    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:33.067696    7436 machine.go:94] provisionDockerMachine start ...
	I0428 16:10:33.067962    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:35.051156    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:35.051156    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:35.051274    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:37.393851    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:37.393851    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:37.400093    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:10:37.407090    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:10:37.407090    7436 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 16:10:37.541464    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 16:10:37.541464    7436 buildroot.go:166] provisioning hostname "addons-610300"
	I0428 16:10:37.541647    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:39.461155    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:39.461155    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:39.471048    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:41.841858    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:41.841858    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:41.848196    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:10:41.848886    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:10:41.848886    7436 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-610300 && echo "addons-610300" | sudo tee /etc/hostname
	I0428 16:10:42.005581    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610300
	
	I0428 16:10:42.005675    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:43.961591    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:43.961591    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:43.972371    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:46.334981    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:46.334981    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:46.347357    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:10:46.347357    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:10:46.347357    7436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-610300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-610300/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-610300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 16:10:46.490095    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 16:10:46.490209    7436 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 16:10:46.490298    7436 buildroot.go:174] setting up certificates
	I0428 16:10:46.490298    7436 provision.go:84] configureAuth start
	I0428 16:10:46.490371    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:48.393463    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:48.393463    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:48.404710    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:50.737123    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:50.737123    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:50.737254    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:52.658725    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:52.658725    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:52.668409    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:55.022468    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:55.022468    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:55.022833    7436 provision.go:143] copyHostCerts
	I0428 16:10:55.023580    7436 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 16:10:55.024933    7436 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 16:10:55.026419    7436 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 16:10:55.027613    7436 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-610300 san=[127.0.0.1 172.27.234.130 addons-610300 localhost minikube]
	I0428 16:10:55.200392    7436 provision.go:177] copyRemoteCerts
	I0428 16:10:55.222530    7436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 16:10:55.222530    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:10:57.154873    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:10:57.154873    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:57.165698    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:10:59.530303    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:10:59.530303    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:10:59.530662    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:10:59.635644    7436 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4131096s)
	I0428 16:10:59.636323    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 16:10:59.680427    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 16:10:59.721658    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 16:10:59.771366    7436 provision.go:87] duration metric: took 13.2810541s to configureAuth
	I0428 16:10:59.771491    7436 buildroot.go:189] setting minikube options for container-runtime
	I0428 16:10:59.772115    7436 config.go:182] Loaded profile config "addons-610300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:10:59.772193    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:01.681781    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:01.681781    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:01.681781    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:04.029988    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:04.030230    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:04.035107    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:11:04.035684    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:11:04.035787    7436 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 16:11:04.169658    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 16:11:04.169658    7436 buildroot.go:70] root file system type: tmpfs
	I0428 16:11:04.169905    7436 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 16:11:04.170033    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:06.052520    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:06.052520    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:06.052595    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:08.359583    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:08.359667    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:08.365841    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:11:08.366009    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:11:08.366590    7436 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 16:11:08.520828    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 16:11:08.520942    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:10.439878    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:10.440109    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:10.440109    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:12.842661    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:12.843405    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:12.849093    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:11:12.849753    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:11:12.849753    7436 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 16:11:14.931578    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 16:11:14.931774    7436 machine.go:97] duration metric: took 41.8639751s to provisionDockerMachine
	I0428 16:11:14.931774    7436 client.go:171] duration metric: took 1m48.7861568s to LocalClient.Create
	I0428 16:11:14.931855    7436 start.go:167] duration metric: took 1m48.7862782s to libmachine.API.Create "addons-610300"
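
Provisioning the Docker unit follows a write-then-swap idiom: the rendered unit is written to docker.service.new, diffed against the installed unit, and only on a difference (or, as here, on first boot when diff cannot stat the old file) is it moved into place and the daemon reloaded, enabled, and restarted. Condensed from the command above, with the unit text elided into $UNIT:

    # condensed sketch of the diff-then-swap step above ($UNIT holds the unit text)
    printf '%s' "$UNIT" | sudo tee /lib/systemd/system/docker.service.new >/dev/null
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }

Because diff exits non-zero both when the files differ and when docker.service does not yet exist (the "can't stat" branch seen here), a first boot takes the same path as a config change.
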
	I0428 16:11:14.931923    7436 start.go:293] postStartSetup for "addons-610300" (driver="hyperv")
	I0428 16:11:14.931975    7436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 16:11:14.945420    7436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 16:11:14.945420    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:16.888175    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:16.888515    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:16.888614    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:19.258569    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:19.258569    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:19.263743    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:11:19.370489    7436 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4250638s)
	I0428 16:11:19.380537    7436 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 16:11:19.389113    7436 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 16:11:19.389348    7436 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 16:11:19.389858    7436 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 16:11:19.390334    7436 start.go:296] duration metric: took 4.4583532s for postStartSetup
	I0428 16:11:19.393406    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:21.360371    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:21.360645    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:21.360740    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:23.726270    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:23.726270    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:23.737458    7436 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\config.json ...
	I0428 16:11:23.740750    7436 start.go:128] duration metric: took 1m57.6014899s to createHost
	I0428 16:11:23.740845    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:25.695570    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:25.706796    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:25.706796    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:28.050760    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:28.050760    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:28.059789    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:11:28.059789    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:11:28.059789    7436 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 16:11:28.191657    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714345888.193187198
	
	I0428 16:11:28.191657    7436 fix.go:216] guest clock: 1714345888.193187198
	I0428 16:11:28.191657    7436 fix.go:229] Guest: 2024-04-28 16:11:28.193187198 -0700 PDT Remote: 2024-04-28 16:11:23.7408453 -0700 PDT m=+123.045792301 (delta=4.452341898s)
	I0428 16:11:28.191933    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:30.161051    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:30.161051    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:30.161366    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:32.529106    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:32.529106    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:32.535227    7436 main.go:141] libmachine: Using SSH client type: native
	I0428 16:11:32.536066    7436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.234.130 22 <nil> <nil>}
	I0428 16:11:32.536066    7436 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714345888
	I0428 16:11:32.673109    7436 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 28 23:11:28 UTC 2024
	
	I0428 16:11:32.673168    7436 fix.go:236] clock set: Sun Apr 28 23:11:28 UTC 2024
	 (err=<nil>)
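
The 4.45s delta above is host-versus-guest drift accumulated while the VM was created; the fix is simply to push the host's epoch into the guest with date -s. A hand-runnable equivalent of the check-and-set (guest IP from this log, helper variables ours):

    # illustrative drift check and reset against the guest from this log
    guest_epoch=$(ssh docker@172.27.234.130 'date +%s')
    host_epoch=$(date +%s)
    echo "drift: $((guest_epoch - host_epoch))s"
    ssh docker@172.27.234.130 "sudo date -s @${host_epoch}"
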
	I0428 16:11:32.673168    7436 start.go:83] releasing machines lock for "addons-610300", held for 2m6.5338981s
	I0428 16:11:32.673395    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:34.639623    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:34.651018    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:34.651298    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:37.008255    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:37.014608    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:37.018677    7436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 16:11:37.018766    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:37.029555    7436 ssh_runner.go:195] Run: cat /version.json
	I0428 16:11:37.029555    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:11:38.993837    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:38.993837    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:38.993837    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:38.996211    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:11:38.996211    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:38.996794    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:11:41.438166    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:41.443492    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:41.443744    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:11:41.468202    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:11:41.468402    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:11:41.468468    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:11:41.660304    7436 ssh_runner.go:235] Completed: cat /version.json: (4.6307443s)
	I0428 16:11:41.660304    7436 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6414986s)
	I0428 16:11:41.674050    7436 ssh_runner.go:195] Run: systemctl --version
	I0428 16:11:41.694971    7436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 16:11:41.706638    7436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 16:11:41.721153    7436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 16:11:41.748671    7436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 16:11:41.748671    7436 start.go:494] detecting cgroup driver to use...
	I0428 16:11:41.748671    7436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:11:41.792981    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 16:11:41.824167    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 16:11:41.851986    7436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 16:11:41.866339    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 16:11:41.899884    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:11:41.929815    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 16:11:41.961311    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:11:41.995364    7436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 16:11:42.030181    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 16:11:42.066150    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 16:11:42.097169    7436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 16:11:42.129485    7436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 16:11:42.161254    7436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 16:11:42.195252    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:11:42.383563    7436 ssh_runner.go:195] Run: sudo systemctl restart containerd
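
The sed pass above normalizes /etc/containerd/config.toml before the restart: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false so containerd matches the cgroupfs driver chosen for this guest, the legacy runc.v1 / runtime.v1.linux shims are rewritten to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and unprivileged ports are enabled. One way to spot-check the result inside the guest (illustrative):

    grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
    # expect: sandbox_image = "registry.k8s.io/pause:3.9", SystemdCgroup = false,
    #         enable_unprivileged_ports = true, conf_dir = "/etc/cni/net.d"
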
	I0428 16:11:42.417189    7436 start.go:494] detecting cgroup driver to use...
	I0428 16:11:42.432406    7436 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 16:11:42.469248    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:11:42.510455    7436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 16:11:42.558744    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:11:42.594856    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 16:11:42.632890    7436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 16:11:42.694907    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 16:11:42.720243    7436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:11:42.765506    7436 ssh_runner.go:195] Run: which cri-dockerd
	I0428 16:11:42.784417    7436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 16:11:42.803148    7436 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 16:11:42.848926    7436 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 16:11:43.033169    7436 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 16:11:43.210601    7436 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 16:11:43.210853    7436 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 16:11:43.253731    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:11:43.443489    7436 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 16:11:45.960169    7436 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5157893s)
	I0428 16:11:45.975677    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 16:11:46.010248    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 16:11:46.057440    7436 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 16:11:46.254166    7436 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 16:11:46.444295    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:11:46.625296    7436 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 16:11:46.664443    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 16:11:46.707348    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:11:46.886269    7436 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 16:11:46.999655    7436 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 16:11:47.014041    7436 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 16:11:47.023070    7436 start.go:562] Will wait 60s for crictl version
	I0428 16:11:47.042071    7436 ssh_runner.go:195] Run: which crictl
	I0428 16:11:47.062706    7436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 16:11:47.116556    7436 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
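
RuntimeName reports docker even though crictl speaks CRI because the calls go through cri-dockerd, the adapter configured in /etc/crictl.yaml a few lines up. The same probe can be made explicit about its endpoint (illustrative):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
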
	I0428 16:11:47.127964    7436 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 16:11:47.170523    7436 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 16:11:47.206071    7436 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 16:11:47.206335    7436 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 16:11:47.210310    7436 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 16:11:47.210310    7436 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 16:11:47.210310    7436 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 16:11:47.210310    7436 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 16:11:47.212227    7436 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 16:11:47.212227    7436 ip.go:210] interface addr: 172.27.224.1/20
	I0428 16:11:47.227041    7436 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 16:11:47.233667    7436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
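
The /etc/hosts edit above is idempotent: any existing host.minikube.internal line is filtered out before the fresh mapping is appended, so repeated starts never accumulate duplicates. The same replace-or-append pattern, generalized into a hypothetical helper (the function is ours, not minikube's):

    add_host_entry() {  # usage: add_host_entry 172.27.224.1 host.minikube.internal
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
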
	I0428 16:11:47.254193    7436 kubeadm.go:877] updating cluster {Name:addons-610300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-610300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.234.130 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 16:11:47.254644    7436 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:11:47.264558    7436 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 16:11:47.288346    7436 docker.go:685] Got preloaded images: 
	I0428 16:11:47.288425    7436 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 16:11:47.299783    7436 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 16:11:47.332971    7436 ssh_runner.go:195] Run: which lz4
	I0428 16:11:47.352720    7436 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 16:11:47.361903    7436 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 16:11:47.362002    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 16:11:49.531475    7436 docker.go:649] duration metric: took 2.1887495s to copy over tarball
	I0428 16:11:49.545299    7436 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 16:11:54.725598    7436 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.1802937s)
	I0428 16:11:54.725827    7436 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 16:11:54.792256    7436 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 16:11:54.812504    7436 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 16:11:54.855742    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:11:55.056070    7436 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 16:12:00.669011    7436 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6129342s)
	I0428 16:12:00.681474    7436 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 16:12:00.705582    7436 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 16:12:00.705582    7436 cache_images.go:84] Images are preloaded, skipping loading
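
With the tarball unpacked and repositories.json rewritten, the restarted daemon reports the full preloaded set, so the slower per-image pull path is skipped. The gate is the same docker images listing, and the decisive entry can be checked by hand (illustrative):

    docker images --format '{{.Repository}}:{{.Tag}}' | grep -x 'registry.k8s.io/kube-apiserver:v1.30.0'
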
	I0428 16:12:00.705582    7436 kubeadm.go:928] updating node { 172.27.234.130 8443 v1.30.0 docker true true} ...
	I0428 16:12:00.705582    7436 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-610300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.234.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-610300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
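
The unit fragment above clears the stock ExecStart and relaunches the versioned kubelet binary with node-specific flags; it lands on the guest as the 10-kubeadm.conf drop-in scp'd a few lines below. Once kubelet is running, the drop-in can be confirmed the same way this log inspects docker.service, with systemctl cat (illustrative):

    systemctl cat kubelet | grep -- '--node-ip'
    # expect: --node-ip=172.27.234.130
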
	I0428 16:12:00.716863    7436 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 16:12:00.751701    7436 cni.go:84] Creating CNI manager for ""
	I0428 16:12:00.751701    7436 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:12:00.751701    7436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 16:12:00.751701    7436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.234.130 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-610300 NodeName:addons-610300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.234.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.234.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 16:12:00.752299    7436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.234.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-610300"
	  kubeletExtraArgs:
	    node-ip: 172.27.234.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.234.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 16:12:00.766349    7436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 16:12:00.787896    7436 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 16:12:00.801514    7436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 16:12:00.820049    7436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0428 16:12:00.852376    7436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 16:12:00.883728    7436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
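
kubeadm ships a validator subcommand (kubeadm config validate, available in recent releases) that can sanity-check a rendered file like the one scp'd above before init consumes it; an optional, illustrative pre-check using the paths from this log:

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
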
	I0428 16:12:00.930033    7436 ssh_runner.go:195] Run: grep 172.27.234.130	control-plane.minikube.internal$ /etc/hosts
	I0428 16:12:00.936394    7436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.234.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 16:12:00.975559    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:12:01.168366    7436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 16:12:01.208728    7436 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300 for IP: 172.27.234.130
	I0428 16:12:01.208753    7436 certs.go:194] generating shared ca certs ...
	I0428 16:12:01.208753    7436 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.208753    7436 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 16:12:01.378694    7436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0428 16:12:01.378694    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.385255    7436 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0428 16:12:01.385255    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.386920    7436 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 16:12:01.783142    7436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0428 16:12:01.783142    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.784535    7436 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0428 16:12:01.784535    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.786194    7436 certs.go:256] generating profile certs ...
	I0428 16:12:01.787410    7436 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.key
	I0428 16:12:01.787410    7436 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt with IP's: []
	I0428 16:12:01.917957    7436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt ...
	I0428 16:12:01.917957    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: {Name:mk74ba408b18563315982e475143aff71614975b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.921518    7436 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.key ...
	I0428 16:12:01.921518    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.key: {Name:mke2190d86937fc66eeb30203be38eb2d91d8fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:01.923005    7436 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.key.8da01d64
	I0428 16:12:01.923005    7436 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.crt.8da01d64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.234.130]
	I0428 16:12:02.116876    7436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.crt.8da01d64 ...
	I0428 16:12:02.116876    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.crt.8da01d64: {Name:mkaf06a630c350cbcf69c4485ad513af2f02730f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:02.118481    7436 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.key.8da01d64 ...
	I0428 16:12:02.118481    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.key.8da01d64: {Name:mkca8bfce77e44736c2d4c17d71bd3f1a35cbcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:02.119995    7436 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.crt.8da01d64 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.crt
	I0428 16:12:02.126242    7436 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.key.8da01d64 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.key
	I0428 16:12:02.131936    7436 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.key
	I0428 16:12:02.132915    7436 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.crt with IP's: []
	I0428 16:12:02.479987    7436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.crt ...
	I0428 16:12:02.479987    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.crt: {Name:mkc170756b1899256c67532a9753a349f79fb73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:02.490386    7436 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.key ...
	I0428 16:12:02.490386    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.key: {Name:mk8be643b50249e2d9017fcf5d7fa3e118d4e2b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:02.500868    7436 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 16:12:02.500868    7436 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 16:12:02.501806    7436 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 16:12:02.502127    7436 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 16:12:02.502383    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 16:12:02.550242    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 16:12:02.593596    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 16:12:02.640021    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 16:12:02.686459    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0428 16:12:02.731674    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0428 16:12:02.776848    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 16:12:02.816373    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 16:12:02.856284    7436 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 16:12:02.898653    7436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 16:12:02.941962    7436 ssh_runner.go:195] Run: openssl version
	I0428 16:12:02.965690    7436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 16:12:02.997954    7436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:12:03.007349    7436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:12:03.019736    7436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:12:03.044156    7436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
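
The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of the minikube CA (the value printed by the x509 -hash command just above), which is how OpenSSL-based clients locate trust anchors in /etc/ssl/certs. Replayed by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints: b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
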
	I0428 16:12:03.075797    7436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 16:12:03.079052    7436 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 16:12:03.084624    7436 kubeadm.go:391] StartCluster: {Name:addons-610300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-610300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.234.130 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:12:03.085178    7436 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 16:12:03.128143    7436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 16:12:03.162686    7436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 16:12:03.200274    7436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 16:12:03.217022    7436 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 16:12:03.217022    7436 kubeadm.go:156] found existing configuration files:
	
	I0428 16:12:03.230297    7436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 16:12:03.247989    7436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 16:12:03.261937    7436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 16:12:03.293318    7436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 16:12:03.310687    7436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 16:12:03.323797    7436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 16:12:03.351993    7436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 16:12:03.368478    7436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 16:12:03.382402    7436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 16:12:03.415129    7436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 16:12:03.432722    7436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 16:12:03.444347    7436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 16:12:03.463509    7436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 16:12:03.694225    7436 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 16:12:16.111203    7436 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 16:12:16.111349    7436 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 16:12:16.111706    7436 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 16:12:16.111962    7436 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 16:12:16.112268    7436 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0428 16:12:16.112453    7436 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 16:12:16.116162    7436 out.go:204]   - Generating certificates and keys ...
	I0428 16:12:16.116162    7436 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 16:12:16.116162    7436 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 16:12:16.116162    7436 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 16:12:16.116802    7436 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 16:12:16.116884    7436 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 16:12:16.116884    7436 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 16:12:16.117414    7436 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 16:12:16.117750    7436 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-610300 localhost] and IPs [172.27.234.130 127.0.0.1 ::1]
	I0428 16:12:16.117750    7436 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 16:12:16.117750    7436 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-610300 localhost] and IPs [172.27.234.130 127.0.0.1 ::1]
	I0428 16:12:16.118385    7436 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 16:12:16.118420    7436 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 16:12:16.118420    7436 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 16:12:16.118420    7436 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 16:12:16.118420    7436 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 16:12:16.119020    7436 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 16:12:16.119116    7436 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 16:12:16.119116    7436 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 16:12:16.119116    7436 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 16:12:16.119726    7436 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 16:12:16.119929    7436 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 16:12:16.121950    7436 out.go:204]   - Booting up control plane ...
	I0428 16:12:16.121950    7436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 16:12:16.121950    7436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 16:12:16.121950    7436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 16:12:16.123663    7436 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 16:12:16.123935    7436 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 16:12:16.123965    7436 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 16:12:16.124300    7436 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 16:12:16.124552    7436 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 16:12:16.124780    7436 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00209981s
	I0428 16:12:16.124780    7436 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 16:12:16.124780    7436 kubeadm.go:309] [api-check] The API server is healthy after 6.506292002s
	I0428 16:12:16.125192    7436 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 16:12:16.125380    7436 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 16:12:16.125634    7436 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 16:12:16.125880    7436 kubeadm.go:309] [mark-control-plane] Marking the node addons-610300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 16:12:16.125880    7436 kubeadm.go:309] [bootstrap-token] Using token: cxjojm.wcuezpglzcjo9jah
	I0428 16:12:16.126702    7436 out.go:204]   - Configuring RBAC rules ...
	I0428 16:12:16.129346    7436 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 16:12:16.129536    7436 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 16:12:16.129763    7436 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 16:12:16.130102    7436 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 16:12:16.130510    7436 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 16:12:16.130723    7436 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 16:12:16.130723    7436 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 16:12:16.130723    7436 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 16:12:16.130723    7436 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 16:12:16.130723    7436 kubeadm.go:309] 
	I0428 16:12:16.131316    7436 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 16:12:16.131316    7436 kubeadm.go:309] 
	I0428 16:12:16.131316    7436 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 16:12:16.131316    7436 kubeadm.go:309] 
	I0428 16:12:16.131316    7436 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 16:12:16.131316    7436 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 16:12:16.131316    7436 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 16:12:16.131316    7436 kubeadm.go:309] 
	I0428 16:12:16.131316    7436 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 16:12:16.131316    7436 kubeadm.go:309] 
	I0428 16:12:16.132286    7436 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 16:12:16.132286    7436 kubeadm.go:309] 
	I0428 16:12:16.132509    7436 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 16:12:16.132644    7436 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 16:12:16.132835    7436 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 16:12:16.132870    7436 kubeadm.go:309] 
	I0428 16:12:16.133055    7436 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 16:12:16.133222    7436 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 16:12:16.133285    7436 kubeadm.go:309] 
	I0428 16:12:16.133396    7436 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cxjojm.wcuezpglzcjo9jah \
	I0428 16:12:16.133396    7436 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 16:12:16.133396    7436 kubeadm.go:309] 	--control-plane 
	I0428 16:12:16.133396    7436 kubeadm.go:309] 
	I0428 16:12:16.134195    7436 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 16:12:16.134195    7436 kubeadm.go:309] 
	I0428 16:12:16.134195    7436 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cxjojm.wcuezpglzcjo9jah \
	I0428 16:12:16.134195    7436 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
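Note on the join commands above: --discovery-token-ca-cert-hash pins the cluster CA that a joining node will trust. Should the hash ever need to be recomputed on the control plane (the standard kubeadm recipe, not a step this test performs), it can be derived from the CA certificate:

    # Recompute the discovery hash from /etc/kubernetes/pki/ca.crt
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'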
	I0428 16:12:16.134195    7436 cni.go:84] Creating CNI manager for ""
	I0428 16:12:16.134195    7436 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:12:16.138175    7436 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0428 16:12:16.149300    7436 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0428 16:12:16.172752    7436 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
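The 496-byte /etc/cni/net.d/1-k8s.conflist copied here carries the bridge CNI configuration selected at cni.go:158. The payload itself is not logged; a representative bridge conflist of roughly this size (an assumption based on minikube's bridge template, not the verbatim file) would be written like:

    # Sketch only: subnet and plugin options are assumptions, not the logged payload
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF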
	I0428 16:12:16.217530    7436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 16:12:16.231881    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:16.231881    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-610300 minikube.k8s.io/updated_at=2024_04_28T16_12_16_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=addons-610300 minikube.k8s.io/primary=true
	I0428 16:12:16.240712    7436 ops.go:34] apiserver oom_adj: -16
	I0428 16:12:16.376669    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:16.883801    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:17.383100    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:17.895605    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:18.392168    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:18.888309    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:19.379712    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:19.888768    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:20.392991    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:20.883366    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:21.378959    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:21.879097    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:22.380505    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:22.878810    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:23.383994    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:23.888884    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:24.377378    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:24.882851    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:25.386547    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:25.884352    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:26.381886    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:26.878979    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:27.383008    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:27.880687    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:28.379562    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:28.891319    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:29.390357    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:29.887664    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:30.393546    7436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 16:12:30.520397    7436 kubeadm.go:1107] duration metric: took 14.3028507s to wait for elevateKubeSystemPrivileges
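The run of kubectl get sa default calls between 16:12:16 and 16:12:30 is elevateKubeSystemPrivileges polling until kube-controller-manager has created the default ServiceAccount referenced by the minikube-rbac cluster-admin binding created at 16:12:16. A minimal sketch of the same wait loop:

    # Poll for the default ServiceAccount with the same binary and kubeconfig as above
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done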
	W0428 16:12:30.520548    7436 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 16:12:30.520548    7436 kubeadm.go:393] duration metric: took 27.4358944s to StartCluster
	I0428 16:12:30.520548    7436 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:30.520548    7436 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:12:30.521786    7436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:12:30.524000    7436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 16:12:30.524000    7436 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.234.130 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 16:12:30.527377    7436 out.go:177] * Verifying Kubernetes components...
	I0428 16:12:30.524000    7436 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0428 16:12:30.524887    7436 config.go:182] Loaded profile config "addons-610300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:12:30.532081    7436 addons.go:69] Setting yakd=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting ingress-dns=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting gcp-auth=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:234] Setting addon yakd=true in "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-610300"
	I0428 16:12:30.532081    7436 mustload.go:65] Loading cluster: addons-610300
	I0428 16:12:30.532081    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting registry=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:234] Setting addon registry=true in "addons-610300"
	I0428 16:12:30.532644    7436 addons.go:69] Setting default-storageclass=true in profile "addons-610300"
	I0428 16:12:30.532785    7436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-610300"
	I0428 16:12:30.532820    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-610300"
	I0428 16:12:30.532820    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:234] Setting addon ingress-dns=true in "addons-610300"
	I0428 16:12:30.532820    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532820    7436 config.go:182] Loaded profile config "addons-610300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:12:30.532644    7436 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting helm-tiller=true in profile "addons-610300"
	I0428 16:12:30.533945    7436 addons.go:234] Setting addon helm-tiller=true in "addons-610300"
	I0428 16:12:30.534526    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:69] Setting ingress=true in profile "addons-610300"
	I0428 16:12:30.534602    7436 addons.go:234] Setting addon ingress=true in "addons-610300"
	I0428 16:12:30.534793    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:69] Setting inspektor-gadget=true in profile "addons-610300"
	I0428 16:12:30.534793    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.534793    7436 addons.go:234] Setting addon inspektor-gadget=true in "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting cloud-spanner=true in profile "addons-610300"
	I0428 16:12:30.535508    7436 addons.go:234] Setting addon cloud-spanner=true in "addons-610300"
	I0428 16:12:30.535825    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.535882    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:69] Setting volumesnapshots=true in profile "addons-610300"
	I0428 16:12:30.536032    7436 addons.go:234] Setting addon volumesnapshots=true in "addons-610300"
	I0428 16:12:30.536228    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.532081    7436 addons.go:69] Setting metrics-server=true in profile "addons-610300"
	I0428 16:12:30.536457    7436 addons.go:234] Setting addon metrics-server=true in "addons-610300"
	I0428 16:12:30.532081    7436 addons.go:69] Setting storage-provisioner=true in profile "addons-610300"
	I0428 16:12:30.536712    7436 addons.go:234] Setting addon storage-provisioner=true in "addons-610300"
	I0428 16:12:30.537573    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.537573    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.538255    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.538255    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.538753    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.539465    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.539774    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.539774    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:30.540747    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.540747    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.541722    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.541722    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.542490    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.543709    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.543709    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.544897    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:30.554994    7436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:12:31.381550    7436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 16:12:31.750347    7436 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1953518s)
	I0428 16:12:31.776049    7436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 16:12:33.546467    7436 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.1649153s)
	I0428 16:12:33.546467    7436 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
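Reconstructed from the two sed expressions in the command completed above, the replaced Corefile gains a log directive immediately before errors and a hosts block immediately before the forward stanza:

    #         log
    #         errors
    #         ...
    #         hosts {
    #            172.27.224.1 host.minikube.internal
    #            fallthrough
    #         }
    #         forward . /etc/resolv.conf
    # Inspect the live result with:
    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml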
	I0428 16:12:33.553030    7436 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7768176s)
	I0428 16:12:33.553030    7436 node_ready.go:35] waiting up to 6m0s for node "addons-610300" to be "Ready" ...
	I0428 16:12:33.621790    7436 node_ready.go:49] node "addons-610300" has status "Ready":"True"
	I0428 16:12:33.621790    7436 node_ready.go:38] duration metric: took 68.7603ms for node "addons-610300" to be "Ready" ...
	I0428 16:12:33.621790    7436 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 16:12:33.704895    7436 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:34.111619    7436 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-610300" context rescaled to 1 replicas
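kapi.go trims coredns to a single replica, since a single-node cluster gains nothing from two. A rough stand-alone equivalent (illustrative, not the code path minikube itself uses):

    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1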
	I0428 16:12:35.827477    7436 pod_ready.go:102] pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace has status "Ready":"False"
	I0428 16:12:36.377767    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.377767    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.386230    7436 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0428 16:12:36.395426    7436 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0428 16:12:36.395426    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0428 16:12:36.395426    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.393922    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.395426    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.404855    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0428 16:12:36.412218    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0428 16:12:36.413072    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0428 16:12:36.413072    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.424416    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0428 16:12:36.424416    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.439968    7436 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0428 16:12:36.435292    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0428 16:12:36.443887    7436 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0428 16:12:36.446231    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0428 16:12:36.446231    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.449471    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0428 16:12:36.453865    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0428 16:12:36.454578    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.456275    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.466260    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.469862    7436 out.go:177]   - Using image docker.io/registry:2.8.3
	I0428 16:12:36.466663    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0428 16:12:36.466663    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.469862    7436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 16:12:36.476327    7436 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 16:12:36.476327    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 16:12:36.476327    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.483449    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0428 16:12:36.483449    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0428 16:12:36.483449    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.489205    7436 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0428 16:12:36.500119    7436 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0428 16:12:36.500119    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0428 16:12:36.500119    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.538079    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.538079    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.538079    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.578560    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.566224    7436 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0428 16:12:36.604804    7436 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0428 16:12:36.604804    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.620945    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.620945    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:36.639700    7436 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0428 16:12:36.639700    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0428 16:12:36.639700    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.639700    7436 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0428 16:12:36.639700    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0428 16:12:36.639700    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:36.680134    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:36.680134    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:36.726453    7436 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0428 16:12:36.759930    7436 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0428 16:12:36.759930    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0428 16:12:36.759930    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:37.066493    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:37.066493    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:37.066493    7436 addons.go:234] Setting addon default-storageclass=true in "addons-610300"
	I0428 16:12:37.066493    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:37.082589    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:37.100397    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:37.100397    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:37.102449    7436 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-610300"
	I0428 16:12:37.102449    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:37.106284    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:37.615520    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:37.615520    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:37.626349    7436 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0428 16:12:37.635614    7436 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0428 16:12:37.635614    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0428 16:12:37.635614    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:37.771745    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:37.771745    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:37.779088    7436 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0428 16:12:37.804700    7436 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0428 16:12:37.804700    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0428 16:12:37.804700    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:37.865472    7436 pod_ready.go:102] pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace has status "Ready":"False"
	I0428 16:12:38.060768    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:38.060768    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:38.087556    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:38.101863    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:38.117893    7436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0428 16:12:38.102449    7436 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0428 16:12:38.153338    7436 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0428 16:12:38.153338    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0428 16:12:38.153338    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:38.177238    7436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0428 16:12:38.193617    7436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0428 16:12:38.196451    7436 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0428 16:12:38.196451    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0428 16:12:38.196451    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:40.568893    7436 pod_ready.go:102] pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace has status "Ready":"False"
	I0428 16:12:41.588942    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:41.588942    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:41.588942    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:41.611501    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:41.611501    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:41.611501    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:41.759061    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:41.787816    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:41.787816    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:41.893494    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:41.898654    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:41.898874    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:42.032139    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:42.032139    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:42.032139    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:42.071882    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:42.071882    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:42.071882    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:42.128217    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:42.128217    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:42.128217    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:42.147285    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:42.147285    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:42.147285    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:42.592432    7436 pod_ready.go:102] pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace has status "Ready":"False"
	I0428 16:12:43.074713    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:43.074713    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:43.083347    7436 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0428 16:12:43.104802    7436 out.go:177]   - Using image docker.io/busybox:stable
	I0428 16:12:43.117998    7436 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0428 16:12:43.118101    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0428 16:12:43.119755    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:43.298780    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:43.298780    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:43.298780    7436 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 16:12:43.298780    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 16:12:43.298780    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:44.516715    7436 pod_ready.go:92] pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:44.516715    7436 pod_ready.go:81] duration metric: took 10.811808s for pod "coredns-7db6d8ff4d-chb7x" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:44.516715    7436 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k7zsx" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:44.516715    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:44.516715    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:44.516715    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:44.848458    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:44.848458    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:44.848458    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:44.910213    7436 pod_ready.go:92] pod "coredns-7db6d8ff4d-k7zsx" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:44.910213    7436 pod_ready.go:81] duration metric: took 393.497ms for pod "coredns-7db6d8ff4d-k7zsx" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:44.910213    7436 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.145198    7436 pod_ready.go:92] pod "etcd-addons-610300" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:45.145198    7436 pod_ready.go:81] duration metric: took 234.9848ms for pod "etcd-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.145198    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.472487    7436 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0428 16:12:45.472487    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:45.473569    7436 pod_ready.go:92] pod "kube-apiserver-addons-610300" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:45.473569    7436 pod_ready.go:81] duration metric: took 328.3712ms for pod "kube-apiserver-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.473569    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.501985    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:45.501985    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:45.501985    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:45.501985    7436 pod_ready.go:92] pod "kube-controller-manager-addons-610300" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:45.501985    7436 pod_ready.go:81] duration metric: took 28.4151ms for pod "kube-controller-manager-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.501985    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gv7gk" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.539734    7436 pod_ready.go:92] pod "kube-proxy-gv7gk" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:45.539734    7436 pod_ready.go:81] duration metric: took 37.7493ms for pod "kube-proxy-gv7gk" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.539734    7436 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.644908    7436 pod_ready.go:92] pod "kube-scheduler-addons-610300" in "kube-system" namespace has status "Ready":"True"
	I0428 16:12:45.644908    7436 pod_ready.go:81] duration metric: took 105.1739ms for pod "kube-scheduler-addons-610300" in "kube-system" namespace to be "Ready" ...
	I0428 16:12:45.644908    7436 pod_ready.go:38] duration metric: took 12.0231043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
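The twelve seconds of pod_ready polling above can be approximated with kubectl's own wait primitive; an illustrative stand-alone equivalent for the kube-dns label (not what minikube itself runs) is:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s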
	I0428 16:12:45.644908    7436 api_server.go:52] waiting for apiserver process to appear ...
	I0428 16:12:45.665352    7436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 16:12:45.865896    7436 api_server.go:72] duration metric: took 15.3418794s to wait for apiserver process to appear ...
	I0428 16:12:45.865896    7436 api_server.go:88] waiting for apiserver healthz status ...
	I0428 16:12:45.865896    7436 api_server.go:253] Checking apiserver healthz at https://172.27.234.130:8443/healthz ...
	I0428 16:12:46.118741    7436 api_server.go:279] https://172.27.234.130:8443/healthz returned 200:
	ok
	I0428 16:12:46.201271    7436 api_server.go:141] control plane version: v1.30.0
	I0428 16:12:46.201271    7436 api_server.go:131] duration metric: took 335.3742ms to wait for apiserver health ...
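The healthz probe is a plain HTTPS GET that returns the literal body "ok"; outside the harness it can be reproduced with curl (-k skips CA verification for brevity):

    curl -sk https://172.27.234.130:8443/healthz   # prints: ok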
	I0428 16:12:46.201271    7436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 16:12:46.265369    7436 system_pods.go:59] 7 kube-system pods found
	I0428 16:12:46.265369    7436 system_pods.go:61] "coredns-7db6d8ff4d-chb7x" [bc022019-3db9-437a-b1e0-c4cc1e553826] Running
	I0428 16:12:46.265369    7436 system_pods.go:61] "coredns-7db6d8ff4d-k7zsx" [6d7f33fa-9e7f-4069-97e2-8bdc46e2dd65] Running
	I0428 16:12:46.265369    7436 system_pods.go:61] "etcd-addons-610300" [21b9a806-b0ec-4cfc-80dd-b75209f1b5ed] Running
	I0428 16:12:46.265369    7436 system_pods.go:61] "kube-apiserver-addons-610300" [b489059a-6a8b-424c-903a-9df0501bb37f] Running
	I0428 16:12:46.265369    7436 system_pods.go:61] "kube-controller-manager-addons-610300" [050c31a8-8ebe-4bf5-a699-1fd8849dc3b6] Running
	I0428 16:12:46.265369    7436 system_pods.go:61] "kube-proxy-gv7gk" [3eac6bd9-4578-4c66-a36d-6abeb1f4bf95] Running
	I0428 16:12:46.265369    7436 system_pods.go:61] "kube-scheduler-addons-610300" [0c80e4f7-eab6-49d8-9672-db6d35d1ad4c] Running
	I0428 16:12:46.265369    7436 system_pods.go:74] duration metric: took 64.098ms to wait for pod list to return data ...
	I0428 16:12:46.265369    7436 default_sa.go:34] waiting for default service account to be created ...
	I0428 16:12:46.290342    7436 default_sa.go:45] found service account: "default"
	I0428 16:12:46.290342    7436 default_sa.go:55] duration metric: took 24.9728ms for default service account to be created ...
	I0428 16:12:46.290342    7436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 16:12:46.342200    7436 system_pods.go:86] 7 kube-system pods found
	I0428 16:12:46.342200    7436 system_pods.go:89] "coredns-7db6d8ff4d-chb7x" [bc022019-3db9-437a-b1e0-c4cc1e553826] Running
	I0428 16:12:46.342200    7436 system_pods.go:89] "coredns-7db6d8ff4d-k7zsx" [6d7f33fa-9e7f-4069-97e2-8bdc46e2dd65] Running
	I0428 16:12:46.342200    7436 system_pods.go:89] "etcd-addons-610300" [21b9a806-b0ec-4cfc-80dd-b75209f1b5ed] Running
	I0428 16:12:46.342200    7436 system_pods.go:89] "kube-apiserver-addons-610300" [b489059a-6a8b-424c-903a-9df0501bb37f] Running
	I0428 16:12:46.342200    7436 system_pods.go:89] "kube-controller-manager-addons-610300" [050c31a8-8ebe-4bf5-a699-1fd8849dc3b6] Running
	I0428 16:12:46.342200    7436 system_pods.go:89] "kube-proxy-gv7gk" [3eac6bd9-4578-4c66-a36d-6abeb1f4bf95] Running
	I0428 16:12:46.342200    7436 system_pods.go:89] "kube-scheduler-addons-610300" [0c80e4f7-eab6-49d8-9672-db6d35d1ad4c] Running
	I0428 16:12:46.342200    7436 system_pods.go:126] duration metric: took 51.8584ms to wait for k8s-apps to be running ...
	I0428 16:12:46.342200    7436 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 16:12:46.376985    7436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 16:12:46.481641    7436 system_svc.go:56] duration metric: took 134.081ms WaitForService to wait for kubelet
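system_svc.go treats the kubelet as running when systemctl is-active exits 0; --quiet suppresses the printed state so that only the exit status is consulted. For example:

    # Exit status alone reports the unit state
    sudo systemctl is-active --quiet kubelet && echo "kubelet active"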
	I0428 16:12:46.482125    7436 kubeadm.go:576] duration metric: took 15.9576235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 16:12:46.482205    7436 node_conditions.go:102] verifying NodePressure condition ...
	I0428 16:12:46.494679    7436 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 16:12:46.494679    7436 node_conditions.go:123] node cpu capacity is 2
	I0428 16:12:46.494679    7436 node_conditions.go:105] duration metric: took 12.474ms to run NodePressure ...
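The NodePressure check reads the node's capacity fields (17734596Ki of ephemeral storage and 2 CPUs here). The same figures are visible directly from the API, e.g.:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-610300 \
      -o jsonpath='{.status.capacity}'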
	I0428 16:12:46.494679    7436 start.go:240] waiting for startup goroutines ...
	I0428 16:12:46.891327    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:46.891327    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:46.891327    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:48.009393    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.009393    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.009393    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
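sshutil dials the guest with the per-machine key generated at provisioning time; a manual session against the same endpoint would look like this (assuming an OpenSSH client on the Windows host):

    ssh -i C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa docker@172.27.234.130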
	I0428 16:12:48.076189    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.076189    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.076189    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.142817    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.142817    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.142817    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.255460    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.255693    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.256028    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.383755    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.383755    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.388375    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.477294    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0428 16:12:48.477294    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0428 16:12:48.485352    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.485624    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.486945    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.559533    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.559533    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.565098    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.674376    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:48.674376    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.674467    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:48.703537    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0428 16:12:48.703748    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0428 16:12:48.714159    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:48.714297    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.714408    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:48.719493    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:48.719493    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:48.719493    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:48.853250    7436 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0428 16:12:48.853250    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0428 16:12:48.861843    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 16:12:48.882289    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0428 16:12:49.024799    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0428 16:12:49.075337    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0428 16:12:49.075414    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0428 16:12:49.170993    7436 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0428 16:12:49.170993    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0428 16:12:49.172148    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0428 16:12:49.256765    7436 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0428 16:12:49.256765    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0428 16:12:49.441694    7436 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0428 16:12:49.441855    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0428 16:12:49.448375    7436 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0428 16:12:49.448375    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0428 16:12:49.523298    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0428 16:12:49.523414    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0428 16:12:49.536250    7436 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0428 16:12:49.536388    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0428 16:12:49.627575    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:49.627575    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:49.627705    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:49.752652    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0428 16:12:49.752701    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0428 16:12:49.756883    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:49.756928    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:49.757187    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:49.822323    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0428 16:12:49.846734    7436 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0428 16:12:49.846910    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0428 16:12:49.902359    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0428 16:12:50.087720    7436 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0428 16:12:50.087846    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0428 16:12:50.103464    7436 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0428 16:12:50.103464    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0428 16:12:50.222263    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:50.222328    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:50.222626    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:50.242594    7436 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0428 16:12:50.242594    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0428 16:12:50.316128    7436 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0428 16:12:50.316253    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0428 16:12:50.475930    7436 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0428 16:12:50.476001    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0428 16:12:50.533464    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:50.533574    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:50.533987    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:50.599877    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0428 16:12:50.645212    7436 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0428 16:12:50.645276    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0428 16:12:50.739073    7436 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0428 16:12:50.739073    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0428 16:12:50.954023    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:50.954083    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:50.954331    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:51.084422    7436 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0428 16:12:51.084535    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0428 16:12:51.105054    7436 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0428 16:12:51.105150    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0428 16:12:51.170372    7436 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0428 16:12:51.170432    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0428 16:12:51.172932    7436 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0428 16:12:51.173012    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0428 16:12:51.295035    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0428 16:12:51.341354    7436 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0428 16:12:51.341502    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0428 16:12:51.375895    7436 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0428 16:12:51.376033    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0428 16:12:51.402477    7436 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0428 16:12:51.402609    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0428 16:12:51.428450    7436 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0428 16:12:51.428583    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0428 16:12:51.681122    7436 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0428 16:12:51.681175    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0428 16:12:51.728668    7436 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0428 16:12:51.728793    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0428 16:12:51.789219    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0428 16:12:51.795406    7436 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0428 16:12:51.795406    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0428 16:12:51.918352    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:51.918352    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:51.918954    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:51.971649    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:51.971649    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:51.976722    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:52.004913    7436 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0428 16:12:52.004913    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0428 16:12:52.080782    7436 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0428 16:12:52.080782    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0428 16:12:52.161295    7436 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0428 16:12:52.161360    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0428 16:12:52.391198    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:12:52.391198    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:52.391613    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:12:52.501536    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0428 16:12:52.523526    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0428 16:12:52.660559    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0428 16:12:52.741696    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.8594028s)
	I0428 16:12:52.741773    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.7169697s)
	I0428 16:12:52.741773    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.879925s)
	I0428 16:12:52.741773    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.5696207s)
	I0428 16:12:52.956299    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0428 16:12:52.993052    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 16:12:53.566675    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.7442315s)
	I0428 16:12:53.566764    7436 addons.go:470] Verifying addon metrics-server=true in "addons-610300"
	I0428 16:12:53.705335    7436 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0428 16:12:53.800661    7436 addons.go:234] Setting addon gcp-auth=true in "addons-610300"
	I0428 16:12:53.800854    7436 host.go:66] Checking if "addons-610300" exists ...
	I0428 16:12:53.802168    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:54.762885    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.8605201s)
	I0428 16:12:54.763061    7436 addons.go:470] Verifying addon registry=true in "addons-610300"
	I0428 16:12:54.769506    7436 out.go:177] * Verifying registry addon...
	I0428 16:12:54.772738    7436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0428 16:12:54.788754    7436 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0428 16:12:54.788828    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
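The kapi.go poll above repeats every ~500ms until the registry pods leave Pending. A rough standalone equivalent of that wait (an illustration only, not minikube's actual mechanism) would be:

	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m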
	I0428 16:12:55.293888    7436 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0428 16:12:55.293959    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:55.800164    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:56.018096    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:56.019792    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:56.034608    7436 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0428 16:12:56.034608    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-610300 ).state
	I0428 16:12:56.307980    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:56.794980    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:57.202482    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.6024968s)
	I0428 16:12:57.205442    7436 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-610300 service yakd-dashboard -n yakd-dashboard
	
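As the message above says, the pod must be Ready before the service command returns a working URL. One hedged way to gate on that (the label selector app.kubernetes.io/name=yakd-dashboard is an assumption; this log never prints YAKD's pod labels):

	# assumption: YAKD pods carry the label app.kubernetes.io/name=yakd-dashboard
	kubectl -n yakd-dashboard wait pod \
	  -l app.kubernetes.io/name=yakd-dashboard \
	  --for=condition=Ready --timeout=5m
	minikube -p addons-610300 service yakd-dashboard -n yakd-dashboard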
	I0428 16:12:57.318311    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:57.784620    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:58.231325    7436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:12:58.231325    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:12:58.231325    7436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-610300 ).networkadapters[0]).ipaddresses[0]
	I0428 16:12:58.322943    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:58.806276    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:59.296376    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:12:59.795846    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:00.335991    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:00.843026    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:00.854001    7436 main.go:141] libmachine: [stdout =====>] : 172.27.234.130
	
	I0428 16:13:00.854189    7436 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:13:00.854401    7436 sshutil.go:53] new ssh client: &{IP:172.27.234.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-610300\id_rsa Username:docker}
	I0428 16:13:01.297033    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:01.790226    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:02.129791    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.8347443s)
	I0428 16:13:02.129791    7436 addons.go:470] Verifying addon ingress=true in "addons-610300"
	I0428 16:13:02.136396    7436 out.go:177] * Verifying ingress addon...
	I0428 16:13:02.142017    7436 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0428 16:13:02.153246    7436 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0428 16:13:02.153310    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:02.285163    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:02.650305    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:02.815064    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:03.166049    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:03.298233    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:03.559394    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.0577887s)
	I0428 16:13:03.559428    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.0358891s)
	W0428 16:13:03.559428    7436 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0428 16:13:03.559556    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.8989846s)
	I0428 16:13:03.559556    7436 retry.go:31] will retry after 237.523097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
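The failure above is the ordering race the stderr line names: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, a custom resource whose CRD (volumesnapshotclasses.snapshot.storage.k8s.io) is created in the same apply batch, and the API server had not finished registering the new kind when the class was submitted. minikube simply retries (and, at 16:13:03.806 below, re-applies with --force), which succeeds once the CRDs are established. A sketch of the same fix done explicitly, using the manifest paths from this log:

	# 1. create the snapshot CRDs and wait until the new kinds are served
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# 2. only then create resources of the new kind
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml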
	I0428 16:13:03.559679    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.5666159s)
	I0428 16:13:03.559679    7436 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.5250632s)
	I0428 16:13:03.559679    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.6033683s)
	I0428 16:13:03.559920    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.770688s)
	I0428 16:13:03.562196    7436 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0428 16:13:03.562196    7436 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-610300"
	I0428 16:13:03.566656    7436 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0428 16:13:03.569837    7436 out.go:177] * Verifying csi-hostpath-driver addon...
	I0428 16:13:03.570822    7436 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0428 16:13:03.570822    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0428 16:13:03.576182    7436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0428 16:13:03.625120    7436 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0428 16:13:03.625120    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	W0428 16:13:03.656619    7436 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
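The "object has been modified" error above is Kubernetes optimistic concurrency: storage-provisioner-rancher tried to mark csi-hostpath-sc non-default while another writer updated the StorageClass first, invalidating the cached resourceVersion. The standard remedy is to re-read and retry; a minimal shell sketch (the annotation key is the standard default-class key; the loop bound is arbitrary):

	# retry the default-class flip until the update lands on the latest version
	for i in 1 2 3; do
	  kubectl annotate storageclass csi-hostpath-sc --overwrite \
	    storageclass.kubernetes.io/is-default-class=false && break
	  sleep 1
	done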
	I0428 16:13:03.684540    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:03.688744    7436 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0428 16:13:03.688744    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:03.708753    7436 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0428 16:13:03.708753    7436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0428 16:13:03.800883    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:03.801375    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0428 16:13:03.806498    7436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0428 16:13:04.092363    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:04.146499    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:04.277339    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:04.587857    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:04.654010    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:04.789458    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:05.089738    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:05.147334    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:05.294473    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:05.594008    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:05.667971    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:05.835122    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:05.969806    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.1684282s)
	I0428 16:13:05.977585    7436 addons.go:470] Verifying addon gcp-auth=true in "addons-610300"
	I0428 16:13:05.990381    7436 out.go:177] * Verifying gcp-auth addon...
	I0428 16:13:06.014233    7436 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0428 16:13:06.022029    7436 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0428 16:13:06.022029    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:06.088538    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:06.180168    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:06.300053    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:06.376411    7436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5699097s)
	I0428 16:13:06.538074    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:06.610334    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:06.656134    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:06.792286    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:07.037787    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:07.095973    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:07.150943    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:07.300271    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:07.542882    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:07.597515    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:07.655264    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:07.783096    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:08.033760    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:08.098133    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:08.154436    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:08.300636    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:08.534845    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:08.593765    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:08.649672    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:08.799080    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:09.033081    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:09.105692    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:09.164075    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:09.295416    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:09.521710    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:09.598571    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:09.658742    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:09.790899    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:10.033422    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:10.096455    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:10.169521    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:10.295032    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:10.522419    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:10.601631    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:10.659525    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:10.799418    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:11.026969    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:11.091784    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:11.150210    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:11.296262    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:11.532564    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:11.594007    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:11.648724    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:11.787959    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:12.032418    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:12.102017    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:12.161454    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:12.294700    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:12.542441    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:12.590570    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:12.649776    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:12.782022    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:13.031534    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:13.086615    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:13.167563    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:13.292379    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:13.525856    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:13.606958    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:13.662655    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:13.788904    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:14.020327    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:14.090728    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:14.157724    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:14.297062    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:14.535074    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:14.599795    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:14.653209    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:14.795018    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:15.027458    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:15.094785    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:15.164401    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:15.300077    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:15.533130    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:15.596015    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:15.662912    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:15.801439    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:16.030184    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:16.095321    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:16.154456    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:16.293629    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:16.526463    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:16.601261    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:16.656307    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:16.790032    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:17.030059    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:17.086736    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:17.148895    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:17.282408    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:17.536705    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:17.598909    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:17.660159    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:17.787781    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:18.020637    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:18.098639    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:18.156958    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:18.290547    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:18.526084    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:18.599774    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:18.655593    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:18.796242    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:19.027062    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:19.104294    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:19.159485    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:19.280846    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:19.536866    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:19.598588    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:19.922774    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:19.928268    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:20.037192    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:20.092836    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:20.160002    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:20.382899    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:20.727159    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:20.728332    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:20.730508    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:21.225825    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:21.226706    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:21.226706    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:21.231293    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:21.291475    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:21.539854    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:21.602828    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:21.654474    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:21.792423    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:22.034988    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:22.092698    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:22.165922    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:22.292916    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:22.537772    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:22.601289    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:22.678571    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:22.790342    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:23.094454    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:23.132637    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:23.160616    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:23.296000    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:23.520275    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:23.596309    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:23.664478    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:23.789097    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:24.028910    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:24.099547    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:24.162879    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:24.297104    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:24.529901    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:24.597243    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:24.665372    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:24.795740    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:25.032610    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:25.090486    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:25.169213    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:25.290218    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:25.521597    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:25.600949    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:25.649011    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:25.787580    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:26.035430    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:26.089039    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:26.165552    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:26.292529    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:26.520268    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:26.608302    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:26.649302    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:26.790817    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:27.023760    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:27.102085    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:27.276084    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:27.284168    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:27.524392    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:27.585233    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:27.670761    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:27.793202    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:28.031135    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:28.091636    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:28.167839    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:28.293974    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:28.522165    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:28.592993    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:28.652284    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:28.794330    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:29.028784    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:29.092265    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:29.166253    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:29.295084    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:29.520405    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:29.586543    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:29.664520    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:29.790527    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:30.023426    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:30.087730    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:30.165096    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:30.288371    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:30.527317    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:30.599975    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:30.657929    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:30.784514    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:31.025210    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:31.102718    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:31.158686    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:31.293054    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:31.539641    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:31.601864    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:31.657801    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:31.790885    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:32.038943    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:32.086406    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:32.167272    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:32.298266    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:32.532194    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:32.596776    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:32.666125    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:32.799275    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:33.130860    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:33.132502    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:33.160607    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:33.300611    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:33.531424    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:33.597745    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:33.665328    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:33.787383    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:34.032513    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:34.101964    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:34.151119    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:34.293188    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:34.530579    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:34.605151    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:34.660376    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:34.879091    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:35.227961    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:35.228286    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:35.231566    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:35.662613    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:35.664586    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:35.665752    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:35.670482    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:35.790270    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:36.033441    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:36.098979    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:36.156744    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:36.285602    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:36.530248    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:36.587510    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:36.661563    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:36.788603    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:37.040489    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:37.090500    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:37.165942    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:37.280540    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:37.526717    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:37.586290    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:37.650108    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:39.620446    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:39.624236    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:39.625319    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:39.630442    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:39.825388    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:39.835767    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:39.836145    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:39.837977    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:39.843950    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:39.859224    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:40.023184    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:40.087919    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:40.158810    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:40.311685    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:40.528973    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:40.587690    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:40.665274    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:40.784521    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:41.026971    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:41.086057    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:41.154293    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:41.308834    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:41.533876    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:41.596768    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:41.652212    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:41.824964    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:42.031024    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:42.096963    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:42.163846    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:42.293364    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:42.530928    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:42.612034    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:42.652324    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:42.796614    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:43.026405    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:43.088732    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:43.150302    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:43.286829    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:43.544531    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:43.606151    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:43.653553    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:43.782602    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:44.027868    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:44.093743    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:44.154385    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:44.292243    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:44.520645    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:44.596578    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:44.651766    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:44.782832    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:45.034930    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:45.089594    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:45.153635    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:45.282049    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:45.522563    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:45.600792    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:45.651193    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:45.781291    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:46.025044    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:46.088966    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:46.159045    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:46.284040    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:46.527530    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:46.586779    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:46.671417    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:46.794693    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:48.291161    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:48.293879    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:48.294559    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:48.299072    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:48.450166    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:48.450365    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:48.450569    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:48.456890    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:48.536931    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:48.587347    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:48.663114    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:48.791630    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:49.034346    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:49.084560    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:49.154653    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:49.287266    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:49.534670    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:49.587228    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:49.660656    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:49.792347    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:50.030471    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:50.085963    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:50.168605    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:50.301763    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:50.529225    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:50.595210    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:50.658445    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:50.782056    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:51.035036    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:51.095813    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:51.154803    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:51.288653    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:51.547460    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:51.590392    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:51.664076    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:51.804966    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:52.039153    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:52.091099    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:52.156656    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:52.283482    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:52.531415    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:52.597096    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:52.649824    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:52.784993    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:53.019827    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:53.095572    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:53.149256    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:53.296383    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:53.529992    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:53.587818    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:53.662351    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:53.786154    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:54.034152    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:54.752829    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:54.758975    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:54.761006    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:54.761822    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:54.767986    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:54.771348    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:54.793320    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:55.035705    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:55.090663    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:55.158946    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:55.293336    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:55.526202    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:55.643408    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:55.659411    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:55.791511    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:56.036354    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:56.116056    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:56.154381    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:56.284120    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:56.534553    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:56.587018    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:56.654659    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:56.793335    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:57.024978    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:57.087298    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:57.167458    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:57.476146    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:57.519837    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:57.601626    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:57.672600    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:57.792392    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:58.021341    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:58.103184    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:58.151119    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:58.299455    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:58.530186    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:58.598516    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:58.659406    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:58.808431    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:59.033830    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:59.103823    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:59.151100    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:59.295638    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:13:59.519795    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:13:59.595550    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:13:59.651425    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:13:59.791822    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:00.024681    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:00.089249    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:00.165317    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:00.295394    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:00.520428    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:00.598348    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:00.651580    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:00.791201    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:01.034592    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:01.097790    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:01.153726    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:01.297587    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:01.519901    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:01.606585    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:01.663069    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:01.794370    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:02.027509    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:02.086544    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:02.166295    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:02.293589    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:02.538788    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:02.585604    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:02.663810    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:02.787532    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:03.023773    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:03.083858    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:03.167689    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:03.300440    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:03.522158    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:03.590370    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:03.669872    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:03.792878    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:04.036667    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:04.089533    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:04.166292    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:04.283286    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:04.527832    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:04.585047    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:04.662953    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:04.806619    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:05.028855    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:05.100628    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:05.162844    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:05.294365    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:05.526460    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:05.601264    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:05.653152    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:05.797695    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:06.167014    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:06.167950    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:06.171197    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:06.529084    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:06.531252    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:06.596011    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:06.649765    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:06.788295    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:07.023895    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:07.109275    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:07.164228    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:07.291858    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:07.531962    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:07.595024    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:07.653948    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:07.793281    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:08.027716    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:08.088271    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:08.154836    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:08.298135    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:08.527123    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:08.877979    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:08.879468    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:08.881440    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:09.569595    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:09.570960    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:09.580413    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:09.583864    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:09.840859    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:09.841623    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:09.842033    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:09.849787    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:10.419488    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:10.424244    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:10.425918    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:10.431029    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:10.530345    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:10.595779    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:10.667647    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:10.799943    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 16:14:11.023721    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:11.084562    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:11.168350    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:11.282822    7436 kapi.go:107] duration metric: took 1m16.5099989s to wait for kubernetes.io/minikube-addons=registry ...
	I0428 16:14:11.583396    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:11.591118    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:11.665293    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:12.024817    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:12.098702    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:12.154122    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:12.533183    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:12.590357    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:12.652566    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:13.036373    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:13.101451    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:13.158453    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:13.531184    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:13.595916    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:13.650612    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:14.026951    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:14.105674    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:14.168619    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:14.527176    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:14.591825    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:14.668474    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:15.025270    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:15.099397    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:15.154991    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:15.535915    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:15.598883    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:15.653119    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:16.023141    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:16.090823    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:16.172255    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:16.521218    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:16.586667    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:16.877027    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:17.023466    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:17.107472    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:17.161128    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:17.520881    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:17.603592    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:17.657597    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:18.028680    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:18.093438    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:18.172317    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:18.519835    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:18.586829    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:18.663425    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:19.039521    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:19.089593    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:19.160209    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:19.730792    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:19.731892    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:19.732323    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:20.050943    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:20.105303    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:20.166544    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:20.525381    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:20.604480    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:20.659704    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:21.023630    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:21.095672    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:21.150654    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:21.521937    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:21.598997    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:21.656034    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:22.021106    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:22.098923    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:22.161581    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:22.530423    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:22.585361    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:22.665370    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:23.029714    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:23.105009    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:23.149657    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:23.534494    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:23.587471    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:23.661985    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:24.127450    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:24.132532    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:24.149515    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:24.537492    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:24.593460    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:24.678151    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:25.029945    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:25.083019    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:25.151404    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:25.520637    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:25.586184    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:25.656360    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:26.023895    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:26.091044    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:26.165678    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:26.535502    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:26.594068    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:26.653071    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:27.039927    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:27.122767    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:27.149360    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:27.520513    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:27.598701    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 16:14:27.652533    7436 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 16:14:28.032453    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 16:14:28.089524    7436 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... ~300 near-identical kapi.go:96 "waiting for pod" lines elided: the same three label selectors were polled roughly every 0.5s from 16:14:28 through 16:15:19, all reporting Pending: [<nil>] ...]
	I0428 16:15:19.086497    7436 kapi.go:107] duration metric: took 2m15.5101655s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... further kapi.go:96 polling lines elided: "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=gcp-auth" remained Pending: [<nil>] from 16:15:19 through 16:15:31 ...]
	I0428 16:15:31.663765    7436 kapi.go:107] duration metric: took 2m29.5215833s to wait for app.kubernetes.io/name=ingress-nginx ...
	[... further kapi.go:96 polling lines elided: "kubernetes.io/minikube-addons=gcp-auth" remained Pending: [<nil>] from 16:15:32 through 16:15:35 ...]
	I0428 16:15:36.038654    7436 kapi.go:107] duration metric: took 2m30.0242884s to wait for kubernetes.io/minikube-addons=gcp-auth ...
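
The kapi.go:96 / kapi.go:107 lines above are minikube's label-selector wait loop reporting each poll and, once the pods are up, the total wait duration. The following is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; the namespace, poll interval, timeout, and kubeconfig path are assumptions, and only the label selector and ~0.5s cadence are taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls pods matching selector in ns until all of them are
    // Running, logging each poll much like the kapi.go:96 lines above.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    			}
    		}
    		if ready {
    			// Corresponds to the kapi.go:107 "duration metric: took ..." line.
    			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond): // the log shows ~0.5s polls
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	if err := waitForPods(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
    		panic(err)
    	}
    }
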
	I0428 16:15:36.041342    7436 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-610300 cluster.
	I0428 16:15:36.043902    7436 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0428 16:15:36.047018    7436 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0428 16:15:36.050459    7436 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, helm-tiller, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0428 16:15:36.055874    7436 addons.go:505] duration metric: took 3m5.5316699s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner nvidia-device-plugin metrics-server yakd helm-tiller inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0428 16:15:36.055874    7436 start.go:245] waiting for cluster config update ...
	I0428 16:15:36.055874    7436 start.go:254] writing updated cluster config ...
	I0428 16:15:36.070756    7436 ssh_runner.go:195] Run: rm -f paused
	I0428 16:15:36.349408    7436 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0428 16:15:36.355546    7436 out.go:177] * Done! kubectl is now configured to use "addons-610300" cluster and "default" namespace by default
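
The `gcp-auth-skip-secret` hint printed above can be exercised with a pod manifest carrying that label key. Below is a minimal sketch using the Kubernetes Go API types; the pod name, image, and the label value "true" are assumptions (the message above only names the key), and the program simply prints an applyable manifest:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "skip-gcp-auth-demo", // hypothetical pod name
    			Labels: map[string]string{
    				// Per the message above, a label with this key tells the
    				// gcp-auth webhook to skip the pod; the value is assumed.
    				"gcp-auth-skip-secret": "true",
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{
    				{Name: "app", Image: "nginx:latest"}, // hypothetical container
    			},
    		},
    	}
    	manifest, err := yaml.Marshal(pod)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(manifest)) // YAML suitable for kubectl apply -f -
    }

Piping the printed manifest to kubectl apply -f - should create a pod that gcp-auth leaves without mounted credentials.
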
	
	
	==> Docker <==
	Apr 28 23:16:17 addons-610300 dockerd[1323]: time="2024-04-28T23:16:17.585189949Z" level=info msg="ignoring event" container=32da50a780c9ef9f77956d2fb3a32902db7233ec5d1ca9039174e594e6cd6490 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:16:17 addons-610300 cri-dockerd[1228]: time="2024-04-28T23:16:17Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Apr 28 23:16:18 addons-610300 dockerd[1330]: time="2024-04-28T23:16:18.133338437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:16:18 addons-610300 dockerd[1330]: time="2024-04-28T23:16:18.133426937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:16:18 addons-610300 dockerd[1330]: time="2024-04-28T23:16:18.133564138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:16:18 addons-610300 dockerd[1330]: time="2024-04-28T23:16:18.138224255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.042505554Z" level=info msg="shim disconnected" id=ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72 namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1323]: time="2024-04-28T23:16:25.043900859Z" level=info msg="ignoring event" container=ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.045082463Z" level=warning msg="cleaning up after shim disconnected" id=ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72 namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.045166963Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1323]: time="2024-04-28T23:16:25.158820568Z" level=info msg="ignoring event" container=6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.161982879Z" level=info msg="shim disconnected" id=6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8 namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.162112180Z" level=warning msg="cleaning up after shim disconnected" id=6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8 namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.162132280Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.396114614Z" level=info msg="shim disconnected" id=c1f6ef41b8656f92a8cb4c1683aab057a980c7a52f8fb001fec45a9c02b2314e namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.396218014Z" level=warning msg="cleaning up after shim disconnected" id=c1f6ef41b8656f92a8cb4c1683aab057a980c7a52f8fb001fec45a9c02b2314e namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.396233614Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1323]: time="2024-04-28T23:16:25.398000220Z" level=info msg="ignoring event" container=c1f6ef41b8656f92a8cb4c1683aab057a980c7a52f8fb001fec45a9c02b2314e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.431683940Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:16:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1323]: time="2024-04-28T23:16:25.472028384Z" level=info msg="ignoring event" container=19d1c847022ea61a9cad00b7f37543800627732867589908b918010a3b44f257 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.472429186Z" level=info msg="shim disconnected" id=19d1c847022ea61a9cad00b7f37543800627732867589908b918010a3b44f257 namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.472514886Z" level=warning msg="cleaning up after shim disconnected" id=19d1c847022ea61a9cad00b7f37543800627732867589908b918010a3b44f257 namespace=moby
	Apr 28 23:16:25 addons-610300 dockerd[1330]: time="2024-04-28T23:16:25.472529386Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:16:28 addons-610300 cri-dockerd[1228]: time="2024-04-28T23:16:28Z" level=error msg="error getting RW layer size for container ID 'ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72': Error response from daemon: No such container: ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72"
	Apr 28 23:16:28 addons-610300 cri-dockerd[1228]: time="2024-04-28T23:16:28Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72'"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	2286f4117004b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1                            38 seconds ago       Exited              gadget                                   3                   22733b7ed7345       gadget-vg6dv
	9efaafe22d074       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   db52b13eab538       gcp-auth-5db96cd9b4-p6mqk
	3ce11ed32f457       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             About a minute ago   Running             controller                               0                   fbd8dd1883b7d       ingress-nginx-controller-84df5799c-mzqpm
	2ff545cf18243       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   9226ed302dc35       csi-hostpathplugin-p2lk7
	ea4970bc08195       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   9226ed302dc35       csi-hostpathplugin-p2lk7
	6f9313749af63       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   9226ed302dc35       csi-hostpathplugin-p2lk7
	a3717e5c87f62       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   9226ed302dc35       csi-hostpathplugin-p2lk7
	3b03f9626d532       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   9226ed302dc35       csi-hostpathplugin-p2lk7
	adf2b27c9f4bb       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   bd0d38cbac163       csi-hostpath-resizer-0
	c3ae95a5f8489       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   950a77ddddc24       csi-hostpath-attacher-0
	711de27fa3528       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   9226ed302dc35       csi-hostpathplugin-p2lk7
	2ff25f0d6c04c       b29d748098e32                                                                                                                                About a minute ago   Exited              patch                                    1                   3502f95d06287       ingress-nginx-admission-patch-stxks
	1d1ca2a4bc186       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              create                                   0                   3ae912b216d7b       ingress-nginx-admission-create-lzvjk
	5ba6f512bdded       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   68adca9587e79       snapshot-controller-745499f584-l68jj
	d0a3009f063a0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   61e3c3c83c70e       snapshot-controller-745499f584-cd4l6
	e4e8f42b36327       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   9a0f62b4e901c       local-path-provisioner-8d985888d-8xldg
	da6e23f201ccc       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   262cb52f75b31       yakd-dashboard-5ddbf7d777-jjtc6
	f07478b9c7654       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50                               2 minutes ago        Running             cloud-spanner-emulator                   0                   756248fe70227       cloud-spanner-emulator-8677549d7-vfs7g
	a6a4791f50873       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   0334c8501919c       nvidia-device-plugin-daemonset-p6hd4
	676ab3ac0c3d7       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   5d43b6601a72f       kube-ingress-dns-minikube
	5f6cfd0bba55c       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   21826f5ed738d       storage-provisioner
	841302614fa0b       cbb01a7bd410d                                                                                                                                3 minutes ago        Running             coredns                                  0                   bc55c0c60f773       coredns-7db6d8ff4d-chb7x
	b929a14ae9a6f       a0bf559e280cf                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   1b85deb7e225a       kube-proxy-gv7gk
	d7d2f56d2e6bd       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   0e29fe677d66c       kube-controller-manager-addons-610300
	66d38f4c6c617       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   be1888b0653db       kube-apiserver-addons-610300
	ee4c21adcba9e       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   206c2e590fe9c       kube-scheduler-addons-610300
	dbf1ff9d12f99       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   6657efcbaeac5       etcd-addons-610300
	
	
	==> controller_ingress [3ce11ed32f45] <==
	W0428 23:15:31.136915       8 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0428 23:15:31.137274       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0428 23:15:31.144133       8 main.go:249] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0428 23:15:31.416222       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0428 23:15:31.593443       8 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0428 23:15:31.623924       8 nginx.go:265] "Starting NGINX Ingress controller"
	I0428 23:15:31.651147       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c3388553-ca3d-4f3d-b4d7-bfa7135c5b70", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0428 23:15:31.652599       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"f2072a18-444b-4123-b630-bf9a82ec7b1a", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0428 23:15:31.653672       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"42fcb1ce-6632-43ab-b681-a6c6c0591e32", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0428 23:15:32.826022       8 nginx.go:308] "Starting NGINX process"
	I0428 23:15:32.832137       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0428 23:15:32.834116       8 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0428 23:15:32.834374       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0428 23:15:32.868654       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0428 23:15:32.869674       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-84df5799c-mzqpm"
	I0428 23:15:32.884116       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-84df5799c-mzqpm" node="addons-610300"
	I0428 23:15:32.971320       8 controller.go:210] "Backend successfully reloaded"
	I0428 23:15:32.971400       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0428 23:15:32.971598       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-84df5799c-mzqpm", UID:"e9633b98-a153-4136-9ee6-2ceffff59b6e", APIVersion:"v1", ResourceVersion:"1218", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [841302614fa0] <==
	[INFO] 10.244.0.8:42298 - 31559 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000223701s
	[INFO] 10.244.0.8:58944 - 53807 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106401s
	[INFO] 10.244.0.8:58944 - 4137 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174401s
	[INFO] 10.244.0.8:35570 - 10047 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000248401s
	[INFO] 10.244.0.8:35570 - 56380 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001329s
	[INFO] 10.244.0.8:35295 - 26762 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107901s
	[INFO] 10.244.0.8:35295 - 5768 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001101s
	[INFO] 10.244.0.8:52376 - 29276 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000173801s
	[INFO] 10.244.0.8:52376 - 22361 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000434402s
	[INFO] 10.244.0.8:51365 - 18038 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001267s
	[INFO] 10.244.0.8:51365 - 3955 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001674s
	[INFO] 10.244.0.8:42041 - 57903 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000230701s
	[INFO] 10.244.0.8:42041 - 41762 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000566902s
	[INFO] 10.244.0.8:53312 - 5264 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000971s
	[INFO] 10.244.0.8:53312 - 34196 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130301s
	[INFO] 10.244.0.22:53010 - 47403 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000251901s
	[INFO] 10.244.0.22:54072 - 49749 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0000909s
	[INFO] 10.244.0.22:51179 - 54539 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001288s
	[INFO] 10.244.0.22:59971 - 39037 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0000778s
	[INFO] 10.244.0.22:45488 - 9402 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001421s
	[INFO] 10.244.0.22:39511 - 56407 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108401s
	[INFO] 10.244.0.22:40441 - 36577 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.004621111s
	[INFO] 10.244.0.22:59617 - 38159 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.004962311s
	[INFO] 10.244.0.26:58457 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000773303s
	[INFO] 10.244.0.26:41081 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108901s
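	
	The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: with the cluster's default ndots:5 resolv.conf, a name such as registry.kube-system.svc.cluster.local (four dots) is first tried against each search domain, producing the NXDOMAIN entries, before the bare query returns NOERROR. A minimal Go sketch of the usual mitigation, assuming it runs inside a cluster pod with the default resolver: a trailing dot marks the name as fully qualified, so no search suffixes are appended.
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	    )
	
	    func main() {
	        // The trailing dot marks the name as an FQDN, so the resolver skips
	        // the search-domain expansion that produced the NXDOMAIN entries above.
	        addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
	        if err != nil {
	            fmt.Println("lookup failed:", err)
	            return
	        }
	        fmt.Println("resolved:", addrs)
	    }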
	
	
	==> describe nodes <==
	Name:               addons-610300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-610300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=addons-610300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T16_12_16_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-610300
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-610300"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:12:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-610300
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Apr 2024 23:16:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Apr 2024 23:16:22 +0000   Sun, 28 Apr 2024 23:12:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Apr 2024 23:16:22 +0000   Sun, 28 Apr 2024 23:12:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Apr 2024 23:16:22 +0000   Sun, 28 Apr 2024 23:12:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Apr 2024 23:16:22 +0000   Sun, 28 Apr 2024 23:12:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.234.130
	  Hostname:    addons-610300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 f76d3e9098034d39af09cd6dee5647d1
	  System UUID:                d42bebba-da9a-6544-99f2-fb3b1489966b
	  Boot ID:                    d1c56ee7-ecb1-461f-aa13-3cb827468a00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-8677549d7-vfs7g      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  gadget                      gadget-vg6dv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  gcp-auth                    gcp-auth-5db96cd9b4-p6mqk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  ingress-nginx               ingress-nginx-controller-84df5799c-mzqpm    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m37s
	  kube-system                 coredns-7db6d8ff4d-chb7x                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m7s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 csi-hostpathplugin-p2lk7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-addons-610300                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m24s
	  kube-system                 kube-apiserver-addons-610300                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-610300       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-gv7gk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-610300                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 nvidia-device-plugin-daemonset-p6hd4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 snapshot-controller-745499f584-cd4l6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 snapshot-controller-745499f584-l68jj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  local-path-storage          local-path-provisioner-8d985888d-8xldg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-jjtc6             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m59s  kube-proxy       
	  Normal  Starting                 4m23s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s  kubelet          Node addons-610300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s  kubelet          Node addons-610300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s  kubelet          Node addons-610300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m21s  kubelet          Node addons-610300 status is now: NodeReady
	  Normal  RegisteredNode           4m9s   node-controller  Node addons-610300 event: Registered Node addons-610300 in Controller
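	
	The request/limit percentages in the tables above are each total divided by the node's allocatable capacity, apparently truncated to a whole percent: 850m of CPU against the 2-CPU (2000m) node is 42%, and 388Mi of memory against 3912872Ki is 10%. A minimal sketch of that arithmetic (integer truncation is an assumption inferred from the figures, not stated in this report):
	
	    package main
	
	    import "fmt"
	
	    // pct mirrors the whole-percent figures above: integer division
	    // truncates, so 850/2000 reports 42, not 42.5.
	    func pct(request, allocatable int64) int64 {
	        return request * 100 / allocatable
	    }
	
	    func main() {
	        fmt.Printf("cpu:    %d%%\n", pct(850, 2000))         // 850m of 2000m -> 42%
	        fmt.Printf("memory: %d%%\n", pct(388*1024, 3912872)) // 388Mi of 3912872Ki -> 10%
	    }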
	
	
	==> dmesg <==
	[  +5.576323] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.008694] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.190183] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.014304] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.385697] kauditd_printk_skb: 44 callbacks suppressed
	[Apr28 23:13] kauditd_printk_skb: 102 callbacks suppressed
	[ +17.668623] kauditd_printk_skb: 70 callbacks suppressed
	[Apr28 23:14] kauditd_printk_skb: 6 callbacks suppressed
	[ +14.824794] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.117559] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.586348] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.379365] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.138688] kauditd_printk_skb: 52 callbacks suppressed
	[Apr28 23:15] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.271738] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.592363] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.348354] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.756455] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.754517] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.481059] kauditd_printk_skb: 18 callbacks suppressed
	[Apr28 23:16] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.422555] hrtimer: interrupt took 4205914 ns
	[  +3.103983] kauditd_printk_skb: 44 callbacks suppressed
	[  +8.996063] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.410119] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [dbf1ff9d12f9] <==
	{"level":"warn","ts":"2024-04-28T23:15:45.145686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.34855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-28T23:15:45.145759Z","caller":"traceutil/trace.go:171","msg":"trace[1048866063] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1317; }","duration":"106.41525ms","start":"2024-04-28T23:15:45.03929Z","end":"2024-04-28T23:15:45.145706Z","steps":["trace[1048866063] 'agreement among raft nodes before linearized reading'  (duration: 106.37135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:15:54.172321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.315049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-28T23:15:54.172412Z","caller":"traceutil/trace.go:171","msg":"trace[1469210884] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1343; }","duration":"329.451849ms","start":"2024-04-28T23:15:53.842944Z","end":"2024-04-28T23:15:54.172396Z","steps":["trace[1469210884] 'range keys from in-memory index tree'  (duration: 329.256949ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:15:54.172444Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-28T23:15:53.842928Z","time spent":"329.507749ms","remote":"127.0.0.1:39734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-28T23:15:54.323441Z","caller":"traceutil/trace.go:171","msg":"trace[1650223678] transaction","detail":"{read_only:false; response_revision:1344; number_of_response:1; }","duration":"143.821646ms","start":"2024-04-28T23:15:54.1796Z","end":"2024-04-28T23:15:54.323422Z","steps":["trace[1650223678] 'process raft request'  (duration: 143.713745ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-28T23:15:54.32436Z","caller":"traceutil/trace.go:171","msg":"trace[585570666] linearizableReadLoop","detail":"{readStateIndex:1411; appliedIndex:1411; }","duration":"143.437344ms","start":"2024-04-28T23:15:54.180911Z","end":"2024-04-28T23:15:54.324349Z","steps":["trace[585570666] 'read index received'  (duration: 143.431644ms)","trace[585570666] 'applied index is now lower than readState.Index'  (duration: 4.9µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-28T23:15:54.324707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.555945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-28T23:15:54.324898Z","caller":"traceutil/trace.go:171","msg":"trace[1960474633] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1344; }","duration":"143.998146ms","start":"2024-04-28T23:15:54.180889Z","end":"2024-04-28T23:15:54.324887Z","steps":["trace[1960474633] 'agreement among raft nodes before linearized reading'  (duration: 143.539344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:15:54.826687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.033447ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12066426699213573495 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1341 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-28T23:15:54.827092Z","caller":"traceutil/trace.go:171","msg":"trace[132746291] linearizableReadLoop","detail":"{readStateIndex:1413; appliedIndex:1411; }","duration":"502.665106ms","start":"2024-04-28T23:15:54.324412Z","end":"2024-04-28T23:15:54.827077Z","steps":["trace[132746291] 'read index received'  (duration: 384.086957ms)","trace[132746291] 'applied index is now lower than readState.Index'  (duration: 118.576449ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-28T23:15:54.827851Z","caller":"traceutil/trace.go:171","msg":"trace[519006885] transaction","detail":"{read_only:false; response_revision:1345; number_of_response:1; }","duration":"646.18975ms","start":"2024-04-28T23:15:54.181647Z","end":"2024-04-28T23:15:54.827837Z","steps":["trace[519006885] 'process raft request'  (duration: 526.915898ms)","trace[519006885] 'compare'  (duration: 117.680546ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-28T23:15:54.828082Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-28T23:15:54.181637Z","time spent":"646.28365ms","remote":"127.0.0.1:39916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1341 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-28T23:15:54.828469Z","caller":"traceutil/trace.go:171","msg":"trace[869638517] transaction","detail":"{read_only:false; response_revision:1346; number_of_response:1; }","duration":"646.751252ms","start":"2024-04-28T23:15:54.181706Z","end":"2024-04-28T23:15:54.828457Z","steps":["trace[869638517] 'process raft request'  (duration: 645.117846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:15:54.828607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-28T23:15:54.181693Z","time spent":"646.885653ms","remote":"127.0.0.1:40062","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1337 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-04-28T23:15:54.829112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"538.522042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12933"}
	{"level":"info","ts":"2024-04-28T23:15:54.829194Z","caller":"traceutil/trace.go:171","msg":"trace[164071562] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1346; }","duration":"538.625142ms","start":"2024-04-28T23:15:54.29056Z","end":"2024-04-28T23:15:54.829185Z","steps":["trace[164071562] 'agreement among raft nodes before linearized reading'  (duration: 538.463141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:15:54.82922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-28T23:15:54.290545Z","time spent":"538.666942ms","remote":"127.0.0.1:39938","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":12956,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-28T23:15:54.83027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"491.385363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12933"}
	{"level":"info","ts":"2024-04-28T23:15:54.830396Z","caller":"traceutil/trace.go:171","msg":"trace[1809601991] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1346; }","duration":"491.540664ms","start":"2024-04-28T23:15:54.338847Z","end":"2024-04-28T23:15:54.830387Z","steps":["trace[1809601991] 'agreement among raft nodes before linearized reading'  (duration: 491.338863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:15:54.830537Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-28T23:15:54.33883Z","time spent":"491.587164ms","remote":"127.0.0.1:39938","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":12956,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-28T23:15:54.831252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.782977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-28T23:15:54.831305Z","caller":"traceutil/trace.go:171","msg":"trace[2016462261] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1346; }","duration":"125.878277ms","start":"2024-04-28T23:15:54.705418Z","end":"2024-04-28T23:15:54.831296Z","steps":["trace[2016462261] 'agreement among raft nodes before linearized reading'  (duration: 125.809777ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-28T23:16:07.427699Z","caller":"traceutil/trace.go:171","msg":"trace[422585327] transaction","detail":"{read_only:false; response_revision:1447; number_of_response:1; }","duration":"418.527697ms","start":"2024-04-28T23:16:07.009152Z","end":"2024-04-28T23:16:07.427679Z","steps":["trace[422585327] 'process raft request'  (duration: 418.262597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-28T23:16:07.428564Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-28T23:16:07.009121Z","time spent":"419.2828ms","remote":"127.0.0.1:39916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1441 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [9efaafe22d07] <==
	2024/04/28 23:15:35 GCP Auth Webhook started!
	2024/04/28 23:15:37 Ready to marshal response ...
	2024/04/28 23:15:37 Ready to write response ...
	2024/04/28 23:15:37 Ready to marshal response ...
	2024/04/28 23:15:37 Ready to write response ...
	2024/04/28 23:15:39 Ready to marshal response ...
	2024/04/28 23:15:39 Ready to write response ...
	2024/04/28 23:15:47 Ready to marshal response ...
	2024/04/28 23:15:47 Ready to write response ...
	2024/04/28 23:16:02 Ready to marshal response ...
	2024/04/28 23:16:02 Ready to write response ...
	2024/04/28 23:16:08 Ready to marshal response ...
	2024/04/28 23:16:08 Ready to write response ...
	2024/04/28 23:16:15 Ready to marshal response ...
	2024/04/28 23:16:15 Ready to write response ...
	
	
	==> kernel <==
	 23:16:38 up 6 min,  0 users,  load average: 2.98, 2.45, 1.11
	Linux addons-610300 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [66d38f4c6c61] <==
	Trace[802146093]: ---"About to write a response" 633ms (23:14:09.570)
	Trace[802146093]: [633.284395ms] [633.284395ms] END
	I0428 23:14:09.571761       1 trace.go:236] Trace[501171510]: "List" accept:application/json, */*,audit-id:e649c19b-9892-46de-a38d-51123fcfa616,client:172.27.224.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (28-Apr-2024 23:14:09.034) (total time: 536ms):
	Trace[501171510]: ["List(recursive=true) etcd3" audit-id:e649c19b-9892-46de-a38d-51123fcfa616,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 536ms (23:14:09.035)]
	Trace[501171510]: [536.664315ms] [536.664315ms] END
	I0428 23:14:35.466307       1 trace.go:236] Trace[728407949]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.234.130,type:*v1.Endpoints,resource:apiServerIPInfo (28-Apr-2024 23:14:34.775) (total time: 690ms):
	Trace[728407949]: ---"Transaction prepared" 200ms (23:14:34.977)
	Trace[728407949]: ---"Txn call completed" 488ms (23:14:35.466)
	Trace[728407949]: [690.748914ms] [690.748914ms] END
	I0428 23:15:30.536788       1 trace.go:236] Trace[1723933536]: "List" accept:application/json, */*,audit-id:efbad23d-2759-4dea-a5a1-0a8cc11d221a,client:172.27.224.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (28-Apr-2024 23:15:30.025) (total time: 510ms):
	Trace[1723933536]: ["List(recursive=true) etcd3" audit-id:efbad23d-2759-4dea-a5a1-0a8cc11d221a,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 510ms (23:15:30.025)]
	Trace[1723933536]: [510.775938ms] [510.775938ms] END
	I0428 23:15:54.832472       1 trace.go:236] Trace[1833563644]: "Update" accept:application/json, */*,audit-id:e94c231e-d4b1-456b-a02d-f15248099472,client:10.244.0.12,api-group:coordination.k8s.io,api-version:v1,name:snapshot-controller-leader,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Apr-2024 23:15:54.180) (total time: 652ms):
	Trace[1833563644]: ["GuaranteedUpdate etcd3" audit-id:e94c231e-d4b1-456b-a02d-f15248099472,key:/leases/kube-system/snapshot-controller-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 651ms (23:15:54.180)
	Trace[1833563644]:  ---"Txn call completed" 651ms (23:15:54.832)]
	Trace[1833563644]: [652.072372ms] [652.072372ms] END
	I0428 23:15:54.832885       1 trace.go:236] Trace[1341819742]: "Update" accept:application/json, */*,audit-id:31dcb628-5010-4d00-9aab-f3d64d15f4f7,client:172.27.234.130,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Apr-2024 23:15:54.180) (total time: 652ms):
	Trace[1341819742]: ["GuaranteedUpdate etcd3" audit-id:31dcb628-5010-4d00-9aab-f3d64d15f4f7,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 652ms (23:15:54.180)
	Trace[1341819742]:  ---"Txn call completed" 651ms (23:15:54.832)]
	Trace[1341819742]: [652.642374ms] [652.642374ms] END
	I0428 23:15:54.834041       1 trace.go:236] Trace[1956896386]: "List" accept:application/json, */*,audit-id:afc0ce7b-458e-49b2-b903-28c90ffba5d4,client:172.27.224.1,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (28-Apr-2024 23:15:54.289) (total time: 544ms):
	Trace[1956896386]: ["List(recursive=true) etcd3" audit-id:afc0ce7b-458e-49b2-b903-28c90ffba5d4,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 544ms (23:15:54.289)]
	Trace[1956896386]: [544.270363ms] [544.270363ms] END
	I0428 23:15:58.635939       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0428 23:16:01.550471       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [d7d2f56d2e6b] <==
	I0428 23:14:59.529113       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:00.695379       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:01.001670       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:01.814285       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:01.876297       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:02.018276       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:02.052707       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:02.071950       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:02.530651       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:02.832434       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:02.851316       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:02.870980       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:02.989797       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:31.406669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="560.401µs"
	I0428 23:15:32.024768       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:32.034809       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:32.125929       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0428 23:15:32.130019       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0428 23:15:35.604772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="46.410204ms"
	I0428 23:15:35.605229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="44.9µs"
	I0428 23:15:45.671240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="48.824498ms"
	I0428 23:15:45.674134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="47µs"
	I0428 23:15:56.618036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.4µs"
	I0428 23:16:16.728476       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="7.1µs"
	I0428 23:16:24.903681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="5.5µs"
	
	
	==> kube-proxy [b929a14ae9a6] <==
	I0428 23:12:38.669199       1 server_linux.go:69] "Using iptables proxy"
	I0428 23:12:38.924924       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.234.130"]
	I0428 23:12:39.308944       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0428 23:12:39.311892       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0428 23:12:39.311985       1 server_linux.go:165] "Using iptables Proxier"
	I0428 23:12:39.449980       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0428 23:12:39.450380       1 server.go:872] "Version info" version="v1.30.0"
	I0428 23:12:39.450418       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0428 23:12:39.453117       1 config.go:192] "Starting service config controller"
	I0428 23:12:39.453134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0428 23:12:39.453221       1 config.go:101] "Starting endpoint slice config controller"
	I0428 23:12:39.453233       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0428 23:12:39.467163       1 config.go:319] "Starting node config controller"
	I0428 23:12:39.468563       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0428 23:12:39.559092       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0428 23:12:39.559206       1 shared_informer.go:320] Caches are synced for service config
	I0428 23:12:39.575493       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ee4c21adcba9] <==
	W0428 23:12:13.420362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0428 23:12:13.420785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0428 23:12:13.421629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0428 23:12:13.421797       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0428 23:12:13.436680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0428 23:12:13.436915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0428 23:12:13.460522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0428 23:12:13.460775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0428 23:12:13.518268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0428 23:12:13.518779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0428 23:12:13.563893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0428 23:12:13.564306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0428 23:12:13.605458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0428 23:12:13.605565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0428 23:12:13.618262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0428 23:12:13.618672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0428 23:12:13.658262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0428 23:12:13.658321       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0428 23:12:13.685520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0428 23:12:13.685568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0428 23:12:13.701129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0428 23:12:13.702119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0428 23:12:13.855097       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0428 23:12:13.856856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0428 23:12:16.282073       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
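	
	The burst of "forbidden" failures above is the usual startup race: the scheduler's reflectors begin listing before its RBAC bindings exist, retry with backoff, and settle once the caches sync three seconds later (the final line). A stripped-down sketch of that retry pattern in plain stdlib Go (client-go's reflector is the real implementation; this only illustrates the shape):
	
	    package main
	
	    import (
	        "errors"
	        "fmt"
	        "time"
	    )
	
	    // listUntilSynced retries a failing list with doubling backoff, the way
	    // a reflector keeps retrying until RBAC bootstrap lets the list succeed.
	    func listUntilSynced(list func() error) error {
	        backoff := 100 * time.Millisecond
	        for attempt := 1; attempt <= 8; attempt++ {
	            if err := list(); err != nil {
	                fmt.Printf("attempt %d failed: %v\n", attempt, err)
	                time.Sleep(backoff)
	                backoff *= 2
	                continue
	            }
	            return nil
	        }
	        return errors.New("caches never synced")
	    }
	
	    func main() {
	        calls := 0
	        _ = listUntilSynced(func() error {
	            calls++
	            if calls < 3 {
	                return errors.New(`nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes"`)
	            }
	            return nil
	        })
	        fmt.Println("caches are synced after", calls, "attempts")
	    }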
	
	
	==> kubelet <==
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.722907    2114 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08f6083-573b-4268-ad97-bbcfd146fef1-kube-api-access-wb8m2" (OuterVolumeSpecName: "kube-api-access-wb8m2") pod "c08f6083-573b-4268-ad97-bbcfd146fef1" (UID: "c08f6083-573b-4268-ad97-bbcfd146fef1"). InnerVolumeSpecName "kube-api-access-wb8m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.801143    2114 scope.go:117] "RemoveContainer" containerID="6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.821160    2114 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4f6bfa57-05b5-11ef-923c-7a6be274ad2f\") pod \"7d4c4d64-76d9-4289-a1fd-9270069e4e26\" (UID: \"7d4c4d64-76d9-4289-a1fd-9270069e4e26\") "
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.821352    2114 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7d4c4d64-76d9-4289-a1fd-9270069e4e26-gcp-creds\") pod \"7d4c4d64-76d9-4289-a1fd-9270069e4e26\" (UID: \"7d4c4d64-76d9-4289-a1fd-9270069e4e26\") "
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.821447    2114 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fgg\" (UniqueName: \"kubernetes.io/projected/7d4c4d64-76d9-4289-a1fd-9270069e4e26-kube-api-access-q4fgg\") pod \"7d4c4d64-76d9-4289-a1fd-9270069e4e26\" (UID: \"7d4c4d64-76d9-4289-a1fd-9270069e4e26\") "
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.821573    2114 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wb8m2\" (UniqueName: \"kubernetes.io/projected/c08f6083-573b-4268-ad97-bbcfd146fef1-kube-api-access-wb8m2\") on node \"addons-610300\" DevicePath \"\""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.821699    2114 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d4c4d64-76d9-4289-a1fd-9270069e4e26-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7d4c4d64-76d9-4289-a1fd-9270069e4e26" (UID: "7d4c4d64-76d9-4289-a1fd-9270069e4e26"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.824514    2114 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4c4d64-76d9-4289-a1fd-9270069e4e26-kube-api-access-q4fgg" (OuterVolumeSpecName: "kube-api-access-q4fgg") pod "7d4c4d64-76d9-4289-a1fd-9270069e4e26" (UID: "7d4c4d64-76d9-4289-a1fd-9270069e4e26"). InnerVolumeSpecName "kube-api-access-q4fgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.827701    2114 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^4f6bfa57-05b5-11ef-923c-7a6be274ad2f" (OuterVolumeSpecName: "task-pv-storage") pod "7d4c4d64-76d9-4289-a1fd-9270069e4e26" (UID: "7d4c4d64-76d9-4289-a1fd-9270069e4e26"). InnerVolumeSpecName "pvc-bc6dc009-d332-46b5-b844-c5c026bcd312". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.894014    2114 scope.go:117] "RemoveContainer" containerID="6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: E0428 23:16:25.896221    2114 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8" containerID="6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.896306    2114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8"} err="failed to get container status \"6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6e6a9bda4a81a946bb8d74a3accb90bc80d362d4595626cd979418e131591ce8"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.896333    2114 scope.go:117] "RemoveContainer" containerID="ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.922599    2114 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7d4c4d64-76d9-4289-a1fd-9270069e4e26-gcp-creds\") on node \"addons-610300\" DevicePath \"\""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.922765    2114 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q4fgg\" (UniqueName: \"kubernetes.io/projected/7d4c4d64-76d9-4289-a1fd-9270069e4e26-kube-api-access-q4fgg\") on node \"addons-610300\" DevicePath \"\""
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.922821    2114 reconciler_common.go:282] "operationExecutor.UnmountDevice started for volume \"pvc-bc6dc009-d332-46b5-b844-c5c026bcd312\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4f6bfa57-05b5-11ef-923c-7a6be274ad2f\") on node \"addons-610300\" "
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.931666    2114 operation_generator.go:1001] UnmountDevice succeeded for volume "pvc-bc6dc009-d332-46b5-b844-c5c026bcd312" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4f6bfa57-05b5-11ef-923c-7a6be274ad2f") on node "addons-610300"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.972082    2114 scope.go:117] "RemoveContainer" containerID="ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: E0428 23:16:25.973555    2114 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72" containerID="ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72"
	Apr 28 23:16:25 addons-610300 kubelet[2114]: I0428 23:16:25.973808    2114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72"} err="failed to get container status \"ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72\": rpc error: code = Unknown desc = Error response from daemon: No such container: ea135da5ddc836bdfb8b391b22da05a5713fa43220794353f1c275fb749bbb72"
	Apr 28 23:16:26 addons-610300 kubelet[2114]: I0428 23:16:26.023427    2114 reconciler_common.go:289] "Volume detached for volume \"pvc-bc6dc009-d332-46b5-b844-c5c026bcd312\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4f6bfa57-05b5-11ef-923c-7a6be274ad2f\") on node \"addons-610300\" DevicePath \"\""
	Apr 28 23:16:27 addons-610300 kubelet[2114]: I0428 23:16:27.594225    2114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4c4d64-76d9-4289-a1fd-9270069e4e26" path="/var/lib/kubelet/pods/7d4c4d64-76d9-4289-a1fd-9270069e4e26/volumes"
	Apr 28 23:16:27 addons-610300 kubelet[2114]: I0428 23:16:27.595299    2114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08f6083-573b-4268-ad97-bbcfd146fef1" path="/var/lib/kubelet/pods/c08f6083-573b-4268-ad97-bbcfd146fef1/volumes"
	Apr 28 23:16:33 addons-610300 kubelet[2114]: I0428 23:16:33.550889    2114 scope.go:117] "RemoveContainer" containerID="2286f4117004bfc5b16f814d8d355dc68571bab06c45a7be1b7318dc3762b1e5"
	Apr 28 23:16:33 addons-610300 kubelet[2114]: E0428 23:16:33.552512    2114 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-vg6dv_gadget(b790d080-a046-4cb5-8d24-03ea6531cfd7)\"" pod="gadget/gadget-vg6dv" podUID="b790d080-a046-4cb5-8d24-03ea6531cfd7"
	
	
	==> storage-provisioner [5f6cfd0bba55] <==
	I0428 23:12:55.382628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0428 23:12:55.402248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0428 23:12:55.402320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0428 23:12:55.638936       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0428 23:12:55.639117       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-610300_2ac51d97-8be3-4056-b4f4-a3d21d6efb2d!
	I0428 23:12:55.639554       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a78b731-a00e-4a3f-8704-e91a3ac5bc57", APIVersion:"v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-610300_2ac51d97-8be3-4056-b4f4-a3d21d6efb2d became leader
	I0428 23:12:55.840591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-610300_2ac51d97-8be3-4056-b4f4-a3d21d6efb2d!
	

-- /stdout --
** stderr ** 
	W0428 16:16:29.312265    9388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
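Two notes on the dump above. First, the kubelet tail shows the gadget pod looping in CrashLoopBackOff; pulling the previous container's output is the quickest triage step (a sketch, with the pod name and namespace taken from the kubelet line, and assuming the cluster is still reachable):

	kubectl --context addons-610300 -n gadget logs gadget-vg6dv --previous
	kubectl --context addons-610300 -n gadget describe pod gadget-vg6dv

Second, the "Unable to resolve the current Docker CLI context" warning means the CLI metadata for the context named "default" is missing under .docker\contexts\meta on the Jenkins host. It is only a warning here, but resetting the CLI to the built-in default context should silence it (a sketch, assuming a standard Docker CLI install):

	docker context ls
	docker context use default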
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-610300 -n addons-610300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-610300 -n addons-610300: (12.803993s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-610300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-7559bf459f-smhrc ingress-nginx-admission-create-lzvjk ingress-nginx-admission-patch-stxks
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-610300 describe pod headlamp-7559bf459f-smhrc ingress-nginx-admission-create-lzvjk ingress-nginx-admission-patch-stxks
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-610300 describe pod headlamp-7559bf459f-smhrc ingress-nginx-admission-create-lzvjk ingress-nginx-admission-patch-stxks: exit status 1 (207.0398ms)

** stderr ** 
	Error from server (NotFound): pods "headlamp-7559bf459f-smhrc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-lzvjk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-stxks" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-610300 describe pod headlamp-7559bf459f-smhrc ingress-nginx-admission-create-lzvjk ingress-nginx-admission-patch-stxks: exit status 1
--- FAIL: TestAddons/parallel/Registry (77.50s)
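The describe step exits non-zero only because the three non-running pods were garbage-collected between the list and the describe calls. Re-listing and describing in a single pass narrows that window (a sketch using the same field selector as helpers_test.go; the namespace is carried along because describe needs it):

	kubectl --context addons-610300 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context addons-610300 describe pod -n "$ns" "$name"
	done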

x
+
TestForceSystemdFlag (638.99s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-178900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
E0428 18:58:41.015820    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-178900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: exit status 90 (8m24.5304116s)
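The stdout below ends at the "Creating hyperv VM" step; the stderr trace that follows carries the detail, including a wait of nearly five minutes just to acquire the machines lock. When reproducing, it is worth saving the full log bundle next to the trace (a sketch; the profile name and binary path are taken from the command above):

	out/minikube-windows-amd64.exe logs -p force-systemd-flag-178900 --file=force-systemd-flag-178900.log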

-- stdout --
	* [force-systemd-flag-178900] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "force-systemd-flag-178900" primary control-plane node in "force-systemd-flag-178900" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0428 18:57:25.286450    9320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 18:57:25.288428    9320 out.go:291] Setting OutFile to fd 2016 ...
	I0428 18:57:25.289369    9320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:57:25.289369    9320 out.go:304] Setting ErrFile to fd 1688...
	I0428 18:57:25.289369    9320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:57:25.317961    9320 out.go:298] Setting JSON to false
	I0428 18:57:25.326855    9320 start.go:129] hostinfo: {"hostname":"minikube1","uptime":13288,"bootTime":1714342556,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 18:57:25.326855    9320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 18:57:25.333773    9320 out.go:177] * [force-systemd-flag-178900] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 18:57:25.337466    9320 notify.go:220] Checking for updates...
	I0428 18:57:25.340398    9320 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:57:25.343171    9320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 18:57:25.345793    9320 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 18:57:25.348322    9320 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 18:57:25.350782    9320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 18:57:25.354191    9320 config.go:182] Loaded profile config "force-systemd-env-844300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:57:25.355002    9320 config.go:182] Loaded profile config "kubernetes-upgrade-069600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0428 18:57:25.355714    9320 config.go:182] Loaded profile config "offline-docker-069600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:57:25.355896    9320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 18:57:30.569756    9320 out.go:177] * Using the hyperv driver based on user configuration
	I0428 18:57:30.573235    9320 start.go:297] selected driver: hyperv
	I0428 18:57:30.573339    9320 start.go:901] validating driver "hyperv" against <nil>
	I0428 18:57:30.573339    9320 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 18:57:30.620827    9320 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 18:57:30.622310    9320 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0428 18:57:30.622382    9320 cni.go:84] Creating CNI manager for ""
	I0428 18:57:30.622477    9320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 18:57:30.622477    9320 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0428 18:57:30.622711    9320 start.go:340] cluster config:
	{Name:force-systemd-flag-178900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-178900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:57:30.623067    9320 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 18:57:30.626538    9320 out.go:177] * Starting "force-systemd-flag-178900" primary control-plane node in "force-systemd-flag-178900" cluster
	I0428 18:57:30.629844    9320 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:57:30.629844    9320 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 18:57:30.629844    9320 cache.go:56] Caching tarball of preloaded images
	I0428 18:57:30.630888    9320 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:57:30.630888    9320 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:57:30.630888    9320 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-178900\config.json ...
	I0428 18:57:30.631467    9320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-178900\config.json: {Name:mk3abe277f19524ffadf6f2991ff18511667786e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:57:30.632623    9320 start.go:360] acquireMachinesLock for force-systemd-flag-178900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 19:02:19.528151    9320 start.go:364] duration metric: took 4m48.8947114s to acquireMachinesLock for "force-systemd-flag-178900"
	I0428 19:02:19.528399    9320 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-178900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-178900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 19:02:19.528663    9320 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 19:02:19.544471    9320 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0428 19:02:19.545061    9320 start.go:159] libmachine.API.Create for "force-systemd-flag-178900" (driver="hyperv")
	I0428 19:02:19.545452    9320 client.go:168] LocalClient.Create starting
	I0428 19:02:19.546254    9320 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 19:02:19.546562    9320 main.go:141] libmachine: Decoding PEM data...
	I0428 19:02:19.546716    9320 main.go:141] libmachine: Parsing certificate...
	I0428 19:02:19.546956    9320 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 19:02:19.547170    9320 main.go:141] libmachine: Decoding PEM data...
	I0428 19:02:19.547257    9320 main.go:141] libmachine: Parsing certificate...
	I0428 19:02:19.547470    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 19:02:21.544396    9320 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 19:02:21.544396    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:21.544396    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 19:02:23.386978    9320 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 19:02:23.387180    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:23.387323    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 19:02:25.021212    9320 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 19:02:25.022063    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:25.022164    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 19:02:28.396251    9320 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 19:02:28.396822    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:28.400718    9320 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 19:02:28.928532    9320 main.go:141] libmachine: Creating SSH key...
	I0428 19:02:29.160412    9320 main.go:141] libmachine: Creating VM...
	I0428 19:02:29.160571    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 19:02:32.222033    9320 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 19:02:32.222033    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:32.222270    9320 main.go:141] libmachine: Using switch "Default Switch"
	I0428 19:02:32.222270    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 19:02:34.048804    9320 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 19:02:34.049384    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:34.049384    9320 main.go:141] libmachine: Creating VHD
	I0428 19:02:34.049514    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 19:02:37.882289    9320 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D15E6B7C-5012-4CC4-B606-F7D2E71150A6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 19:02:37.882289    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:37.882434    9320 main.go:141] libmachine: Writing magic tar header
	I0428 19:02:37.882500    9320 main.go:141] libmachine: Writing SSH key tar header
	I0428 19:02:37.892550    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 19:02:41.017770    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:02:41.018547    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:41.018630    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\disk.vhd' -SizeBytes 20000MB
	I0428 19:02:43.673726    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:02:43.673803    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:43.673869    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-flag-178900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0428 19:02:48.916876    9320 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	force-systemd-flag-178900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 19:02:48.916876    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:48.917702    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-flag-178900 -DynamicMemoryEnabled $false
	I0428 19:02:51.048393    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:02:51.048393    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:51.048488    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-178900 -Count 2
	I0428 19:02:53.242369    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:02:53.242369    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:53.242369    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-flag-178900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\boot2docker.iso'
	I0428 19:02:55.719048    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:02:55.719048    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:55.719048    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-flag-178900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\disk.vhd'
	I0428 19:02:58.256809    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:02:58.256809    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:02:58.256809    9320 main.go:141] libmachine: Starting VM...
	I0428 19:02:58.256809    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-flag-178900
	I0428 19:03:01.242967    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:03:01.243574    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:01.243574    9320 main.go:141] libmachine: Waiting for host to start...
	I0428 19:03:01.243638    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:03.416279    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:03.416279    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:03.416279    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:05.953784    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:03:05.953852    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:06.962576    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:09.190229    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:09.190229    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:09.190229    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:12.070262    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:03:12.070340    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:13.077786    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:15.461162    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:15.461764    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:15.461764    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:18.088348    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:03:18.088348    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:19.095076    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:21.380280    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:21.380280    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:21.380649    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:24.012535    9320 main.go:141] libmachine: [stdout =====>] : 
	I0428 19:03:24.012535    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:25.012930    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:27.189496    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:27.189584    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:27.189763    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:29.830349    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:03:29.830424    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:29.830510    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:31.939282    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:31.939384    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:31.939384    9320 machine.go:94] provisionDockerMachine start ...
	I0428 19:03:31.939476    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:34.119313    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:34.119767    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:34.119767    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:36.687231    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:03:36.687366    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:36.694443    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:03:36.695444    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:03:36.695472    9320 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 19:03:36.848258    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 19:03:36.848258    9320 buildroot.go:166] provisioning hostname "force-systemd-flag-178900"
	I0428 19:03:36.848801    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:38.930569    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:38.930569    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:38.930569    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:41.476077    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:03:41.476665    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:41.482389    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:03:41.482457    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:03:41.482457    9320 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-178900 && echo "force-systemd-flag-178900" | sudo tee /etc/hostname
	I0428 19:03:41.661381    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-178900
	
	I0428 19:03:41.661381    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:43.736784    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:43.736784    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:43.737778    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:46.263031    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:03:46.263346    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:46.268358    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:03:46.269045    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:03:46.269045    9320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-178900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-178900/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-178900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 19:03:46.443392    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 19:03:46.443392    9320 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 19:03:46.443392    9320 buildroot.go:174] setting up certificates
	I0428 19:03:46.443392    9320 provision.go:84] configureAuth start
	I0428 19:03:46.443392    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:48.540809    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:48.540809    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:48.540960    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:51.009818    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:03:51.010127    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:51.010240    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:53.096446    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:53.096446    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:53.096446    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:03:55.635323    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:03:55.635323    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:55.635323    9320 provision.go:143] copyHostCerts
	I0428 19:03:55.636112    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 19:03:55.636498    9320 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 19:03:55.636555    9320 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 19:03:55.636555    9320 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 19:03:55.637864    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 19:03:55.637864    9320 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 19:03:55.637864    9320 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 19:03:55.637864    9320 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 19:03:55.639448    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 19:03:55.639627    9320 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 19:03:55.639627    9320 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 19:03:55.639627    9320 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 19:03:55.641007    9320 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-178900 san=[127.0.0.1 172.27.225.207 force-systemd-flag-178900 localhost minikube]
	I0428 19:03:55.780224    9320 provision.go:177] copyRemoteCerts
	I0428 19:03:55.793741    9320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 19:03:55.793821    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:03:57.912346    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:03:57.912381    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:03:57.912473    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:00.432459    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:00.436498    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:00.437883    9320 sshutil.go:53] new ssh client: &{IP:172.27.225.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\id_rsa Username:docker}
	I0428 19:04:00.553131    9320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7593765s)
	I0428 19:04:00.553131    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 19:04:00.553686    9320 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 19:04:00.605012    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 19:04:00.605763    9320 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0428 19:04:00.657964    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 19:04:00.658607    9320 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 19:04:00.706855    9320 provision.go:87] duration metric: took 14.2633342s to configureAuth
	I0428 19:04:00.706920    9320 buildroot.go:189] setting minikube options for container-runtime
	I0428 19:04:00.707517    9320 config.go:182] Loaded profile config "force-systemd-flag-178900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 19:04:00.707517    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:02.833508    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:02.833582    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:02.833582    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:05.389067    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:05.484247    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:05.490186    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:04:05.491244    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:04:05.491340    9320 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 19:04:05.641235    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 19:04:05.641235    9320 buildroot.go:70] root file system type: tmpfs
	I0428 19:04:05.641701    9320 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 19:04:05.641701    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:07.707087    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:07.707146    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:07.707211    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:10.305665    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:10.305876    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:10.311339    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:04:10.312134    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:04:10.312134    9320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 19:04:10.479763    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 19:04:10.479975    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:12.572031    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:12.572839    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:12.572839    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:15.100085    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:15.100344    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:15.107147    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:04:15.107634    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:04:15.107710    9320 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 19:04:17.624649    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 19:04:17.624649    9320 machine.go:97] duration metric: took 45.6851413s to provisionDockerMachine
	I0428 19:04:17.624649    9320 client.go:171] duration metric: took 1m58.0788819s to LocalClient.Create
	I0428 19:04:17.624649    9320 start.go:167] duration metric: took 1m58.0792734s to libmachine.API.Create "force-systemd-flag-178900"
	I0428 19:04:17.624649    9320 start.go:293] postStartSetup for "force-systemd-flag-178900" (driver="hyperv")
	I0428 19:04:17.625186    9320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 19:04:17.643387    9320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 19:04:17.643387    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:19.767238    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:19.767342    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:19.767342    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:22.664304    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:22.664360    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:22.664652    9320 sshutil.go:53] new ssh client: &{IP:172.27.225.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\id_rsa Username:docker}
	I0428 19:04:22.809490    9320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1659567s)
	I0428 19:04:22.823073    9320 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 19:04:22.831610    9320 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 19:04:22.831688    9320 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 19:04:22.832418    9320 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 19:04:22.834268    9320 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 19:04:22.834357    9320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 19:04:22.849971    9320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 19:04:22.870365    9320 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 19:04:22.924430    9320 start.go:296] duration metric: took 5.2992298s for postStartSetup
	I0428 19:04:22.927575    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:25.044184    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:25.044236    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:25.044360    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:27.548072    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:27.548072    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:27.548924    9320 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-178900\config.json ...
	I0428 19:04:27.551867    9320 start.go:128] duration metric: took 2m8.0228624s to createHost
	I0428 19:04:27.551961    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:29.632648    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:29.632648    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:29.633281    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:32.083067    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:32.083574    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:32.088970    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:04:32.089760    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:04:32.089760    9320 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 19:04:32.238415    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714356272.234651527
	
	I0428 19:04:32.238526    9320 fix.go:216] guest clock: 1714356272.234651527
	I0428 19:04:32.238526    9320 fix.go:229] Guest: 2024-04-28 19:04:32.234651527 -0700 PDT Remote: 2024-04-28 19:04:27.5519615 -0700 PDT m=+422.357088501 (delta=4.682690027s)
	I0428 19:04:32.238526    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:34.262998    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:34.262998    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:34.263274    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:36.722969    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:36.723374    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:36.729219    9320 main.go:141] libmachine: Using SSH client type: native
	I0428 19:04:36.729354    9320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.225.207 22 <nil> <nil>}
	I0428 19:04:36.729354    9320 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714356272
	I0428 19:04:36.884460    9320 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 02:04:32 UTC 2024
	
	I0428 19:04:36.884460    9320 fix.go:236] clock set: Mon Apr 29 02:04:32 UTC 2024
	 (err=<nil>)
	I0428 19:04:36.884460    9320 start.go:83] releasing machines lock for "force-systemd-flag-178900", held for 2m17.355843s
	I0428 19:04:36.884460    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:38.949764    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:38.949764    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:38.950355    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:41.458106    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:41.458106    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:41.463302    9320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 19:04:41.463302    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:41.473023    9320 ssh_runner.go:195] Run: cat /version.json
	I0428 19:04:41.473023    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-178900 ).state
	I0428 19:04:43.631549    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:43.631619    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:43.631619    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:43.666490    9320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 19:04:43.666812    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:43.666907    9320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-178900 ).networkadapters[0]).ipaddresses[0]
	I0428 19:04:46.361195    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:46.361434    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:46.361591    9320 sshutil.go:53] new ssh client: &{IP:172.27.225.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\id_rsa Username:docker}
	I0428 19:04:46.392045    9320 main.go:141] libmachine: [stdout =====>] : 172.27.225.207
	
	I0428 19:04:46.392045    9320 main.go:141] libmachine: [stderr =====>] : 
	I0428 19:04:46.392045    9320 sshutil.go:53] new ssh client: &{IP:172.27.225.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-178900\id_rsa Username:docker}
	I0428 19:04:46.541285    9320 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0779695s)
	I0428 19:04:46.541285    9320 ssh_runner.go:235] Completed: cat /version.json: (5.0682487s)
	I0428 19:04:46.554807    9320 ssh_runner.go:195] Run: systemctl --version
	I0428 19:04:46.576491    9320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 19:04:46.584514    9320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 19:04:46.595492    9320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 19:04:46.632696    9320 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 19:04:46.632696    9320 start.go:494] detecting cgroup driver to use...
	I0428 19:04:46.632696    9320 start.go:498] using "systemd" cgroup driver as enforced via flags
	I0428 19:04:46.632696    9320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 19:04:46.689961    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 19:04:46.721869    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 19:04:46.743615    9320 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0428 19:04:46.756660    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0428 19:04:46.787129    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 19:04:46.821550    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 19:04:46.852569    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 19:04:46.896279    9320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 19:04:46.934314    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 19:04:46.967213    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 19:04:47.001168    9320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 19:04:47.038977    9320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 19:04:47.071463    9320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 19:04:47.105512    9320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 19:04:47.304408    9320 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 19:04:47.340422    9320 start.go:494] detecting cgroup driver to use...
	I0428 19:04:47.341412    9320 start.go:498] using "systemd" cgroup driver as enforced via flags
	I0428 19:04:47.358425    9320 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 19:04:47.395590    9320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 19:04:47.436562    9320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 19:04:47.483021    9320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 19:04:47.522007    9320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 19:04:47.563005    9320 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 19:04:47.627780    9320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 19:04:47.654106    9320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 19:04:47.700150    9320 ssh_runner.go:195] Run: which cri-dockerd
	I0428 19:04:47.719223    9320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 19:04:47.740600    9320 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 19:04:47.784951    9320 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 19:04:47.986948    9320 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 19:04:48.199518    9320 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0428 19:04:48.199647    9320 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0428 19:04:48.246539    9320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 19:04:48.460419    9320 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 19:05:49.603410    9320 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1426982s)
	I0428 19:05:49.615893    9320 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 19:05:49.651842    9320 out.go:177] 
	W0428 19:05:49.654812    9320 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 02:04:15 force-systemd-flag-178900 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:15.873770919Z" level=info msg="Starting up"
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:15.876488279Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:15.881582105Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.925156371Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.963554512Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.963741010Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.963909807Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.964097004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.964328301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.964510198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.965180589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.965323686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.965626782Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.965648982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.965827079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.966197874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.970824106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.971118402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.971451097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.971592495Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.971945290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.972207486Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 02:04:15 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:15.972421883Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.048880393Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.049074491Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.049103390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.049125690Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.049143990Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.049589884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050066177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050230675Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050277874Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050297974Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050314574Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050336173Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050352473Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050372373Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050429772Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050523471Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050548770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050564370Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050587770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050604670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050620369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050636769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050658269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050674569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050688469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050705268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050720868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050739868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050764267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050781867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050796667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050814467Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050838566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050854466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050874266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.050967865Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.051134662Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.051158062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.051172962Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.051371459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.051624856Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.051647055Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.052119849Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.052427245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.052733141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 02:04:16 force-systemd-flag-178900 dockerd[673]: time="2024-04-29T02:04:16.052806040Z" level=info msg="containerd successfully booted in 0.133608s"
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.011150772Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.062091020Z" level=info msg="Loading containers: start."
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.406440589Z" level=info msg="Loading containers: done."
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.447150863Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.447590670Z" level=info msg="Daemon has completed initialization"
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.613904518Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 02:04:17 force-systemd-flag-178900 systemd[1]: Started Docker Application Container Engine.
	Apr 29 02:04:17 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:17.615548341Z" level=info msg="API listen on [::]:2376"
	Apr 29 02:04:48 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:48.483439109Z" level=info msg="Processing signal 'terminated'"
	Apr 29 02:04:48 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:48.484950450Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 02:04:48 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:48.485434563Z" level=info msg="Daemon shutdown complete"
	Apr 29 02:04:48 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:48.485487064Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 02:04:48 force-systemd-flag-178900 dockerd[667]: time="2024-04-29T02:04:48.485526865Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 02:04:48 force-systemd-flag-178900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 02:04:49 force-systemd-flag-178900 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 02:04:49 force-systemd-flag-178900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 02:04:49 force-systemd-flag-178900 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 02:04:49 force-systemd-flag-178900 dockerd[1023]: time="2024-04-29T02:04:49.575561797Z" level=info msg="Starting up"
	Apr 29 02:05:49 force-systemd-flag-178900 dockerd[1023]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 02:05:49 force-systemd-flag-178900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 02:05:49 force-systemd-flag-178900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 02:05:49 force-systemd-flag-178900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 19:05:49.655896    9320 out.go:239] * 
	W0428 19:05:49.656762    9320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 19:05:49.661983    9320 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-178900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-178900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-178900 ssh "docker info --format {{.CgroupDriver}}": (1m0.0400187s)
docker_test.go:115: expected systemd cgroup driver, got: 
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	

-- /stdout --
** stderr ** 
	W0428 19:05:50.057823   10832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
panic.go:626: *** TestForceSystemdFlag FAILED at 2024-04-28 19:06:49.9684815 -0700 PDT m=+10692.606197001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-178900 -n force-systemd-flag-178900
E0428 19:06:59.655288    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-178900 -n force-systemd-flag-178900: exit status 6 (13.9497522s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0428 19:06:50.104690   12324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 19:07:03.843554   12324 status.go:417] kubeconfig endpoint: get endpoint: "force-systemd-flag-178900" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-flag-178900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-flag-178900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-178900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-178900: (1m0.2479095s)
--- FAIL: TestForceSystemdFlag (638.99s)
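
The failure mode above is dockerd timing out while dialing /run/containerd/containerd.sock after minikube rewrote /etc/containerd/config.toml and restarted the engines. A minimal diagnostic sketch, assuming SSH access to the guest (e.g. `minikube ssh -p force-systemd-flag-178900`, the profile name from this run); these are stock systemd/grep invocations, not commands the test executes:

	# Did containerd come back up after the config.toml edits?
	sudo systemctl status containerd --no-pager

	# Look for a parse error or crash in the last containerd log lines
	sudo journalctl -u containerd --no-pager | tail -n 20

	# Confirm the cgroup setting that --force-systemd is meant to enforce
	grep -n 'SystemdCgroup' /etc/containerd/config.toml

If containerd never came back, the docker.service failure is secondary: the second dockerd start dials the system containerd socket and gives up once its dial deadline expires, which matches the "context deadline exceeded" line in the journal above.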
x
+
TestErrorSpam/setup (187.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-906500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 --driver=hyperv
E0428 16:20:36.419718    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:36.435888    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:36.463498    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:36.490581    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:36.533708    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:36.624886    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:36.791932    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:37.116374    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:37.757497    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:39.048221    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:41.610777    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:46.746927    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:20:56.993409    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:21:17.474115    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:21:58.434707    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 16:23:20.355084    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-906500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 --driver=hyperv: (3m7.4845557s)
error_spam_test.go:96: unexpected stderr: "W0428 16:20:14.434770   14996 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-906500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=17977
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-906500" primary control-plane node in "nospam-906500" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-906500" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0428 16:20:14.434770   14996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
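The 64-hex-character directory in that path appears to be the SHA-256 digest of the context name: hashing the string "default" yields exactly 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f, which is evidently how the Docker CLI keys context metadata under .docker\contexts\meta. A quick check in Go:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Digest of the context name; compare with the directory name
		// in the stderr path above.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	}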
--- FAIL: TestErrorSpam/setup (187.49s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (32.67s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
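The failure is a leftover artifact: judging by the "link" prefix, the test hard-links the binary with os.Link, which cannot create out\kubectl.exe because a copy from an earlier run is still present (Windows ERROR_ALREADY_EXISTS). A hedged sketch of making such a link step idempotent (linkForce is a hypothetical helper, not the test's actual code):

	package main

	import (
		"fmt"
		"os"
	)

	// linkForce removes a stale target before hard-linking, avoiding
	// "Cannot create a file when that file already exists".
	func linkForce(oldname, newname string) error {
		if _, err := os.Lstat(newname); err == nil {
			if err := os.Remove(newname); err != nil {
				return fmt.Errorf("removing stale %s: %w", newname, err)
			}
		}
		return os.Link(oldname, newname)
	}

	func main() {
		if err := linkForce("out/minikube-windows-amd64.exe", `out\kubectl.exe`); err != nil {
			fmt.Println(err)
		}
	}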
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: (11.5554635s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (8.3457878s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                     | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-906500                                            | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:26 PDT |
	| start   | -p functional-285400                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:26 PDT | 28 Apr 24 16:30 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-285400                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:30 PDT | 28 Apr 24 16:32 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | minikube-local-cache-test:functional-285400                 |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache delete                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | minikube-local-cache-test:functional-285400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	| ssh     | functional-285400 ssh sudo                                  | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:33 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-285400                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-285400 ssh                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache reload                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	| ssh     | functional-285400 ssh                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-285400 kubectl --                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | --context functional-285400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:30:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:30:09.459705    4440 out.go:291] Setting OutFile to fd 944 ...
	I0428 16:30:09.460275    4440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:30:09.460275    4440 out.go:304] Setting ErrFile to fd 912...
	I0428 16:30:09.460275    4440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:30:09.483777    4440 out.go:298] Setting JSON to false
	I0428 16:30:09.488329    4440 start.go:129] hostinfo: {"hostname":"minikube1","uptime":4452,"bootTime":1714342556,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:30:09.489014    4440 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:30:09.492585    4440 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:30:09.496081    4440 notify.go:220] Checking for updates...
	I0428 16:30:09.498052    4440 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:30:09.500467    4440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:30:09.502120    4440 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:30:09.505412    4440 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:30:09.507802    4440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 16:30:09.509990    4440 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:30:09.511012    4440 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 16:30:14.716181    4440 out.go:177] * Using the hyperv driver based on existing profile
	I0428 16:30:14.720406    4440 start.go:297] selected driver: hyperv
	I0428 16:30:14.720406    4440 start.go:901] validating driver "hyperv" against &{Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:30:14.720805    4440 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 16:30:14.771824    4440 cni.go:84] Creating CNI manager for ""
	I0428 16:30:14.771824    4440 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
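That cni.go line records a fixed selection rule: a VM driver combined with the docker runtime on Kubernetes v1.24+ gets the bridge CNI recommended, presumably because dockershim's built-in networking went away in v1.24. A simplified sketch of the rule as the log line states it (not the real cni.go logic; the driver list is an illustrative subset):

	package main

	import "fmt"

	// chooseCNI mirrors the logged decision: VM driver + docker runtime
	// on Kubernetes >= v1.24 defaults to the bridge CNI.
	func chooseCNI(driver, runtime string, major, minor int) string {
		vmDriver := driver == "hyperv" || driver == "kvm2" || driver == "virtualbox"
		if vmDriver && runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
			return "bridge"
		}
		return "" // leave CNI selection to other rules
	}

	func main() {
		fmt.Println(chooseCNI("hyperv", "docker", 1, 30)) // bridge
	}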
	I0428 16:30:14.771824    4440 start.go:340] cluster config:
	{Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:30:14.772419    4440 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:30:14.776859    4440 out.go:177] * Starting "functional-285400" primary control-plane node in "functional-285400" cluster
	I0428 16:30:14.778803    4440 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:30:14.779772    4440 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 16:30:14.779772    4440 cache.go:56] Caching tarball of preloaded images
	I0428 16:30:14.779772    4440 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 16:30:14.779772    4440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 16:30:14.779772    4440 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\config.json ...
	I0428 16:30:14.782128    4440 start.go:360] acquireMachinesLock for functional-285400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 16:30:14.783137    4440 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-285400"
	I0428 16:30:14.783194    4440 start.go:96] Skipping create...Using existing machine configuration
	I0428 16:30:14.783194    4440 fix.go:54] fixHost starting: 
	I0428 16:30:14.783194    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:17.405483    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:17.405783    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:17.405783    4440 fix.go:112] recreateIfNeeded on functional-285400: state=Running err=<nil>
	W0428 16:30:17.405909    4440 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 16:30:17.409208    4440 out.go:177] * Updating the running hyperv "functional-285400" VM ...
	I0428 16:30:17.411854    4440 machine.go:94] provisionDockerMachine start ...
	I0428 16:30:17.412033    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:19.508149    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:19.508149    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:19.508419    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:22.037717    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:22.038322    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:22.044455    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:30:22.044603    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:30:22.044603    4440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 16:30:22.187121    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-285400
	
	I0428 16:30:22.187203    4440 buildroot.go:166] provisioning hostname "functional-285400"
	I0428 16:30:22.187284    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:24.318507    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:24.318569    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:24.318569    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:26.800038    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:26.800038    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:26.806732    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:30:26.807460    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:30:26.807460    4440 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-285400 && echo "functional-285400" | sudo tee /etc/hostname
	I0428 16:30:26.966421    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-285400
	
	I0428 16:30:26.966421    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:29.003088    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:29.003088    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:29.003972    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:31.590994    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:31.590994    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:31.598620    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:30:31.598620    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:30:31.598620    4440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-285400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-285400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-285400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 16:30:31.734143    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 16:30:31.734143    4440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 16:30:31.734143    4440 buildroot.go:174] setting up certificates
	I0428 16:30:31.734143    4440 provision.go:84] configureAuth start
	I0428 16:30:31.734345    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:33.890918    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:33.891568    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:33.891568    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:36.373046    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:36.373508    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:36.373639    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:38.436065    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:38.436065    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:38.436247    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:40.906259    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:40.906498    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:40.906498    4440 provision.go:143] copyHostCerts
	I0428 16:30:40.906498    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 16:30:40.906498    4440 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 16:30:40.906498    4440 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 16:30:40.907294    4440 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 16:30:40.908473    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 16:30:40.908718    4440 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 16:30:40.908718    4440 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 16:30:40.909426    4440 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 16:30:40.910664    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 16:30:40.911281    4440 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 16:30:40.911319    4440 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 16:30:40.911371    4440 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 16:30:40.912616    4440 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-285400 san=[127.0.0.1 172.27.228.231 functional-285400 localhost minikube]
	I0428 16:30:41.284309    4440 provision.go:177] copyRemoteCerts
	I0428 16:30:41.296303    4440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 16:30:41.296303    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:43.335850    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:43.335850    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:43.335850    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:45.809148    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:45.809231    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:45.809231    4440 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:30:45.911006    4440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6146974s)
	I0428 16:30:45.911006    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 16:30:45.911006    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 16:30:45.961006    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 16:30:45.961209    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0428 16:30:46.014166    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 16:30:46.014166    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 16:30:46.078907    4440 provision.go:87] duration metric: took 14.3447445s to configureAuth
	I0428 16:30:46.078907    4440 buildroot.go:189] setting minikube options for container-runtime
	I0428 16:30:46.079536    4440 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:30:46.079536    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:48.113667    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:48.113990    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:48.113990    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:50.594929    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:50.594929    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:50.603218    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:30:50.603218    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:30:50.604153    4440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 16:30:50.726241    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 16:30:50.726241    4440 buildroot.go:70] root file system type: tmpfs
	I0428 16:30:50.726241    4440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 16:30:50.726241    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:52.846097    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:52.846199    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:52.846279    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:30:55.328392    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:30:55.328616    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:55.335250    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:30:55.335250    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:30:55.335947    4440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 16:30:55.497932    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 16:30:55.498069    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:30:57.550756    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:30:57.550756    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:30:57.550756    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:00.026034    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:00.026191    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:00.032524    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:31:00.033118    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:31:00.033118    4440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 16:31:00.196177    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 16:31:00.196177    4440 machine.go:97] duration metric: took 42.7842667s to provisionDockerMachine
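A side note on the odd %!s(MISSING) tokens in the unit-file command above (and in later commands such as date +%!s(MISSING).%!N(MISSING)): they are not shell syntax but Go's fmt package flagging format verbs that received no operand, which happens when a command template containing literal % characters is itself passed through a Printf-style call. A two-line reproduction:

	package main

	import "fmt"

	func main() {
		// %s and %N get no arguments, so fmt substitutes %!s(MISSING)
		// and %!N(MISSING), the exact tokens seen in the log.
		fmt.Println(fmt.Sprintf("date +%s.%N"))
	}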
	I0428 16:31:00.196177    4440 start.go:293] postStartSetup for "functional-285400" (driver="hyperv")
	I0428 16:31:00.196177    4440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 16:31:00.211350    4440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 16:31:00.211441    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:31:02.238507    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:31:02.238507    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:02.238507    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:04.720636    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:04.721406    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:04.721471    4440 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:31:04.824196    4440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6126764s)
	I0428 16:31:04.841131    4440 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 16:31:04.847773    4440 command_runner.go:130] > NAME=Buildroot
	I0428 16:31:04.847773    4440 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 16:31:04.847773    4440 command_runner.go:130] > ID=buildroot
	I0428 16:31:04.847773    4440 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 16:31:04.848321    4440 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 16:31:04.848518    4440 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 16:31:04.848632    4440 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 16:31:04.849123    4440 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 16:31:04.849900    4440 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 16:31:04.849900    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 16:31:04.851066    4440 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts -> hosts in /etc/test/nested/copy/3228
	I0428 16:31:04.851066    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts -> /etc/test/nested/copy/3228/hosts
	I0428 16:31:04.862275    4440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3228
	I0428 16:31:04.879964    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 16:31:04.928926    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts --> /etc/test/nested/copy/3228/hosts (40 bytes)
	I0428 16:31:04.976858    4440 start.go:296] duration metric: took 4.7806742s for postStartSetup
	I0428 16:31:04.976858    4440 fix.go:56] duration metric: took 50.1935965s for fixHost
	I0428 16:31:04.976858    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:31:07.032876    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:31:07.033475    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:07.033475    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:09.521781    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:09.522359    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:09.527929    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:31:09.528659    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:31:09.528659    4440 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 16:31:09.660806    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714347069.656599088
	
	I0428 16:31:09.660871    4440 fix.go:216] guest clock: 1714347069.656599088
	I0428 16:31:09.660871    4440 fix.go:229] Guest: 2024-04-28 16:31:09.656599088 -0700 PDT Remote: 2024-04-28 16:31:04.9768583 -0700 PDT m=+55.628685801 (delta=4.679740788s)
	I0428 16:31:09.661018    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:31:11.701851    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:31:11.702064    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:11.702112    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:14.205232    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:14.205624    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:14.212329    4440 main.go:141] libmachine: Using SSH client type: native
	I0428 16:31:14.213109    4440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:31:14.213109    4440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714347069
	I0428 16:31:14.364275    4440 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 28 23:31:09 UTC 2024
	
	I0428 16:31:14.364275    4440 fix.go:236] clock set: Sun Apr 28 23:31:09 UTC 2024
	 (err=<nil>)
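In the exchange above, fix.go compares two clock readings (delta=4.679740788s) and, finding the drift past tolerance, writes a time into the VM with date -s. A minimal sketch of that check, with the tolerance value and helper names as assumptions for illustration:

	package main

	import (
		"fmt"
		"time"
	)

	// syncClock mirrors the step above: when two clock readings diverge
	// past a tolerance, push the reference time into the guest via date -s.
	func syncClock(a, b time.Time, tolerance time.Duration, run func(string) error) error {
		drift := a.Sub(b)
		if drift < 0 {
			drift = -drift
		}
		if drift <= tolerance {
			return nil
		}
		return run(fmt.Sprintf("sudo date -s @%d", a.Unix()))
	}

	func main() {
		a := time.Unix(1714347069, 656599088)    // guest clock from the log
		b := a.Add(-4679740788 * time.Nanosecond) // 4.679740788s delta from the log
		_ = syncClock(a, b, 2*time.Second, func(cmd string) error {
			fmt.Println(cmd) // sudo date -s @1714347069
			return nil
		})
	}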
	I0428 16:31:14.364275    4440 start.go:83] releasing machines lock for "functional-285400", held for 59.5810569s
	I0428 16:31:14.364275    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:31:16.408791    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:31:16.408791    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:16.409276    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:18.912842    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:18.912842    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:18.916936    4440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 16:31:18.916936    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:31:18.931419    4440 ssh_runner.go:195] Run: cat /version.json
	I0428 16:31:18.931419    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:31:21.008339    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:31:21.008809    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:21.008809    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:21.011792    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:31:21.011792    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:21.011792    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:31:23.619531    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:23.619531    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:23.619798    4440 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:31:23.656352    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:31:23.656352    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:31:23.656792    4440 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:31:23.713586    4440 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0428 16:31:23.713586    4440 ssh_runner.go:235] Completed: cat /version.json: (4.7821595s)
	I0428 16:31:23.726646    4440 ssh_runner.go:195] Run: systemctl --version
	I0428 16:31:23.950047    4440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 16:31:23.950047    4440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0331036s)
	I0428 16:31:23.950176    4440 command_runner.go:130] > systemd 252 (252)
	I0428 16:31:23.950176    4440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0428 16:31:23.962928    4440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 16:31:23.973046    4440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0428 16:31:23.974226    4440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 16:31:23.984587    4440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 16:31:24.004254    4440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0428 16:31:24.004254    4440 start.go:494] detecting cgroup driver to use...
	I0428 16:31:24.004566    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:31:24.043243    4440 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 16:31:24.056320    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 16:31:24.093553    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 16:31:24.113901    4440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 16:31:24.127120    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 16:31:24.163978    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:31:24.197386    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 16:31:24.232921    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:31:24.270519    4440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 16:31:24.305335    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 16:31:24.337308    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 16:31:24.374336    4440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
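	The sed edits above rewrite /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI conf_dir. A quick way to confirm the net effect (a sketch; the key names are reconstructed from the sed patterns above, not captured from the node):
	# Verify the containerd settings the sed edits above are expected to produce.
	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# Expected (reconstruction):
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true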
	I0428 16:31:24.411949    4440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 16:31:24.431813    4440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 16:31:24.444834    4440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 16:31:24.477946    4440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:31:24.767058    4440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 16:31:24.800764    4440 start.go:494] detecting cgroup driver to use...
	I0428 16:31:24.813681    4440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 16:31:24.840737    4440 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 16:31:24.841368    4440 command_runner.go:130] > [Unit]
	I0428 16:31:24.841368    4440 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 16:31:24.841368    4440 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 16:31:24.841443    4440 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 16:31:24.841443    4440 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 16:31:24.841443    4440 command_runner.go:130] > StartLimitBurst=3
	I0428 16:31:24.841443    4440 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 16:31:24.841443    4440 command_runner.go:130] > [Service]
	I0428 16:31:24.841443    4440 command_runner.go:130] > Type=notify
	I0428 16:31:24.841443    4440 command_runner.go:130] > Restart=on-failure
	I0428 16:31:24.841524    4440 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 16:31:24.841554    4440 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 16:31:24.841554    4440 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 16:31:24.841554    4440 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 16:31:24.841554    4440 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 16:31:24.841597    4440 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 16:31:24.841633    4440 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 16:31:24.841633    4440 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 16:31:24.841633    4440 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 16:31:24.841633    4440 command_runner.go:130] > ExecStart=
	I0428 16:31:24.841633    4440 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 16:31:24.841633    4440 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 16:31:24.841633    4440 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 16:31:24.841633    4440 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 16:31:24.841633    4440 command_runner.go:130] > LimitNOFILE=infinity
	I0428 16:31:24.841633    4440 command_runner.go:130] > LimitNPROC=infinity
	I0428 16:31:24.841633    4440 command_runner.go:130] > LimitCORE=infinity
	I0428 16:31:24.841633    4440 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 16:31:24.841633    4440 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 16:31:24.841633    4440 command_runner.go:130] > TasksMax=infinity
	I0428 16:31:24.841633    4440 command_runner.go:130] > TimeoutStartSec=0
	I0428 16:31:24.841633    4440 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 16:31:24.841633    4440 command_runner.go:130] > Delegate=yes
	I0428 16:31:24.841633    4440 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 16:31:24.841633    4440 command_runner.go:130] > KillMode=process
	I0428 16:31:24.841633    4440 command_runner.go:130] > [Install]
	I0428 16:31:24.841633    4440 command_runner.go:130] > WantedBy=multi-user.target
	I0428 16:31:24.853641    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:31:24.889644    4440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 16:31:24.935775    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:31:24.978430    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 16:31:25.007354    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:31:25.047870    4440 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 16:31:25.066558    4440 ssh_runner.go:195] Run: which cri-dockerd
	I0428 16:31:25.073441    4440 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 16:31:25.085746    4440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 16:31:25.104377    4440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 16:31:25.149086    4440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 16:31:25.444429    4440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 16:31:25.701349    4440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 16:31:25.701713    4440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
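	The 130-byte /etc/docker/daemon.json pushed here is not echoed in the log; for the cgroupfs driver selected above it would typically look like the following (assumed content, shown in the same tee idiom the log uses; the exact payload may differ). The daemon-reload and docker restart that follow in the log pick this file up.
	# Assumed daemon.json for a cgroupfs Docker setup; payload not captured in this log.
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" }
	}
	EOF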
	I0428 16:31:25.746301    4440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:31:26.018214    4440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 16:31:38.885145    4440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8659108s)
	I0428 16:31:38.896149    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 16:31:38.939157    4440 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0428 16:31:38.993131    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 16:31:39.032763    4440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 16:31:39.258380    4440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 16:31:39.462605    4440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:31:39.672744    4440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 16:31:39.721742    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 16:31:39.762243    4440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:31:39.980035    4440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 16:31:40.105527    4440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 16:31:40.117503    4440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 16:31:40.126488    4440 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0428 16:31:40.126488    4440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0428 16:31:40.126701    4440 command_runner.go:130] > Device: 0,22	Inode: 1507        Links: 1
	I0428 16:31:40.126701    4440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0428 16:31:40.126701    4440 command_runner.go:130] > Access: 2024-04-28 23:31:39.996717910 +0000
	I0428 16:31:40.126701    4440 command_runner.go:130] > Modify: 2024-04-28 23:31:39.996717910 +0000
	I0428 16:31:40.126701    4440 command_runner.go:130] > Change: 2024-04-28 23:31:39.999717374 +0000
	I0428 16:31:40.126701    4440 command_runner.go:130] >  Birth: -
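	The 60s wait logged at start.go:541 reduces to polling stat until the socket shows up; a minimal equivalent loop (a sketch, not minikube's actual code):
	# Poll for the cri-dockerd socket with a 60s deadline (sketch).
	deadline=$((SECONDS + 60))
	until stat /var/run/cri-dockerd.sock >/dev/null 2>&1; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out waiting for socket' >&2; exit 1; }
	  sleep 1
	done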
	I0428 16:31:40.127348    4440 start.go:562] Will wait 60s for crictl version
	I0428 16:31:40.139266    4440 ssh_runner.go:195] Run: which crictl
	I0428 16:31:40.144281    4440 command_runner.go:130] > /usr/bin/crictl
	I0428 16:31:40.160331    4440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 16:31:40.215544    4440 command_runner.go:130] > Version:  0.1.0
	I0428 16:31:40.215544    4440 command_runner.go:130] > RuntimeName:  docker
	I0428 16:31:40.215544    4440 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0428 16:31:40.215544    4440 command_runner.go:130] > RuntimeApiVersion:  v1
	I0428 16:31:40.215663    4440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 16:31:40.225020    4440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 16:31:40.256677    4440 command_runner.go:130] > 26.0.2
	I0428 16:31:40.271681    4440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 16:31:40.303599    4440 command_runner.go:130] > 26.0.2
	I0428 16:31:40.308302    4440 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 16:31:40.308520    4440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 16:31:40.312724    4440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 16:31:40.312724    4440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 16:31:40.312724    4440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 16:31:40.312724    4440 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 16:31:40.316257    4440 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 16:31:40.316257    4440 ip.go:210] interface addr: 172.27.224.1/20
	I0428 16:31:40.327694    4440 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 16:31:40.336087    4440 command_runner.go:130] > 172.27.224.1	host.minikube.internal
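	The grep confirms host.minikube.internal already maps to the host's Default Switch address found above; when the entry is absent, an idempotent append along these lines (a sketch, not the exact minikube command) adds it:
	# Add the host gateway alias only if it is not already present (sketch).
	grep -q 'host.minikube.internal' /etc/hosts || \
	  printf '%s\t%s\n' 172.27.224.1 host.minikube.internal | sudo tee -a /etc/hosts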
	I0428 16:31:40.336215    4440 kubeadm.go:877] updating cluster {Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 16:31:40.336215    4440 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:31:40.346605    4440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 16:31:40.368167    4440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 16:31:40.368167    4440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 16:31:40.368167    4440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 16:31:40.368167    4440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 16:31:40.368167    4440 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 16:31:40.368167    4440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 16:31:40.369157    4440 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 16:31:40.369157    4440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 16:31:40.369157    4440 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 16:31:40.369157    4440 docker.go:615] Images already preloaded, skipping extraction
	I0428 16:31:40.378157    4440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 16:31:40.400189    4440 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 16:31:40.400744    4440 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 16:31:40.400744    4440 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 16:31:40.400744    4440 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 16:31:40.400744    4440 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 16:31:40.400744    4440 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 16:31:40.400744    4440 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 16:31:40.400744    4440 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 16:31:40.401793    4440 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 16:31:40.401876    4440 cache_images.go:84] Images are preloaded, skipping loading
	I0428 16:31:40.401980    4440 kubeadm.go:928] updating node { 172.27.228.231 8441 v1.30.0 docker true true} ...
	I0428 16:31:40.402011    4440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-285400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.228.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 16:31:40.411266    4440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 16:31:40.449231    4440 command_runner.go:130] > cgroupfs
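	The driver reported by docker info must agree with the kubelet's cgroupDriver (set to cgroupfs in the KubeletConfiguration below); a mismatch leaves the kubelet unable to start pods. A quick consistency check, using paths that appear elsewhere in this log:
	# Both commands should report cgroupfs once the node is configured.
	docker info --format '{{.CgroupDriver}}'
	grep cgroupDriver /var/lib/kubelet/config.yaml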
	I0428 16:31:40.450237    4440 cni.go:84] Creating CNI manager for ""
	I0428 16:31:40.450237    4440 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:31:40.450237    4440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 16:31:40.450237    4440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.228.231 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-285400 NodeName:functional-285400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.228.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.228.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 16:31:40.450237    4440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.228.231
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-285400"
	  kubeletExtraArgs:
	    node-ip: 172.27.228.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.228.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 16:31:40.462227    4440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 16:31:40.480553    4440 command_runner.go:130] > kubeadm
	I0428 16:31:40.480553    4440 command_runner.go:130] > kubectl
	I0428 16:31:40.480553    4440 command_runner.go:130] > kubelet
	I0428 16:31:40.480663    4440 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 16:31:40.497393    4440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 16:31:40.516406    4440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0428 16:31:40.548421    4440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 16:31:40.580100    4440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0428 16:31:40.625974    4440 ssh_runner.go:195] Run: grep 172.27.228.231	control-plane.minikube.internal$ /etc/hosts
	I0428 16:31:40.631900    4440 command_runner.go:130] > 172.27.228.231	control-plane.minikube.internal
	I0428 16:31:40.643529    4440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:31:40.860387    4440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 16:31:40.885569    4440 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400 for IP: 172.27.228.231
	I0428 16:31:40.885569    4440 certs.go:194] generating shared ca certs ...
	I0428 16:31:40.885764    4440 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:31:40.886578    4440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 16:31:40.887000    4440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 16:31:40.887000    4440 certs.go:256] generating profile certs ...
	I0428 16:31:40.888055    4440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.key
	I0428 16:31:40.888962    4440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\apiserver.key.8aec9f7f
	I0428 16:31:40.889364    4440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\proxy-client.key
	I0428 16:31:40.889364    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 16:31:40.889364    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 16:31:40.889364    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 16:31:40.890154    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 16:31:40.890215    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 16:31:40.890215    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 16:31:40.890215    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 16:31:40.890770    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 16:31:40.891082    4440 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 16:31:40.891833    4440 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 16:31:40.891989    4440 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 16:31:40.891989    4440 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 16:31:40.892913    4440 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 16:31:40.893442    4440 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 16:31:40.894437    4440 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 16:31:40.894511    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 16:31:40.894511    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 16:31:40.894511    4440 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:31:40.896671    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 16:31:40.947740    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 16:31:41.004066    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 16:31:41.108697    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 16:31:41.211165    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 16:31:41.271576    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0428 16:31:41.331926    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 16:31:41.387292    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 16:31:41.448490    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 16:31:41.505055    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 16:31:41.555632    4440 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 16:31:41.611801    4440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 16:31:41.662801    4440 ssh_runner.go:195] Run: openssl version
	I0428 16:31:41.673345    4440 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0428 16:31:41.687885    4440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 16:31:41.743695    4440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:31:41.752457    4440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:31:41.752457    4440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:31:41.766516    4440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 16:31:41.775292    4440 command_runner.go:130] > b5213941
	I0428 16:31:41.788939    4440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 16:31:41.825753    4440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 16:31:41.867972    4440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 16:31:41.875992    4440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 16:31:41.876164    4440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 16:31:41.890031    4440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 16:31:41.900050    4440 command_runner.go:130] > 51391683
	I0428 16:31:41.913694    4440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
	I0428 16:31:41.959380    4440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 16:31:41.998496    4440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 16:31:42.005940    4440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 16:31:42.005940    4440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 16:31:42.017488    4440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 16:31:42.026952    4440 command_runner.go:130] > 3ec20f2e
	I0428 16:31:42.040537    4440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
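	Each of the three rounds above follows OpenSSL's hashed-directory convention: compute the certificate's subject hash, then link the certificate as <hash>.0 under /etc/ssl/certs so verification can locate it. The generic per-certificate pattern (sketch):
	# OpenSSL hashed-dir convention, as applied three times above (sketch).
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"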
	I0428 16:31:42.081337    4440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 16:31:42.109015    4440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 16:31:42.109015    4440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0428 16:31:42.109015    4440 command_runner.go:130] > Device: 8,1	Inode: 2102098     Links: 1
	I0428 16:31:42.109015    4440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 16:31:42.109015    4440 command_runner.go:130] > Access: 2024-04-28 23:29:00.488799756 +0000
	I0428 16:31:42.109015    4440 command_runner.go:130] > Modify: 2024-04-28 23:29:00.488799756 +0000
	I0428 16:31:42.109015    4440 command_runner.go:130] > Change: 2024-04-28 23:29:00.488799756 +0000
	I0428 16:31:42.109015    4440 command_runner.go:130] >  Birth: 2024-04-28 23:29:00.488799756 +0000
	I0428 16:31:42.125904    4440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0428 16:31:42.146952    4440 command_runner.go:130] > Certificate will not expire
	I0428 16:31:42.160767    4440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0428 16:31:42.169712    4440 command_runner.go:130] > Certificate will not expire
	I0428 16:31:42.183854    4440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0428 16:31:42.193174    4440 command_runner.go:130] > Certificate will not expire
	I0428 16:31:42.207147    4440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0428 16:31:42.219112    4440 command_runner.go:130] > Certificate will not expire
	I0428 16:31:42.234573    4440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0428 16:31:42.243512    4440 command_runner.go:130] > Certificate will not expire
	I0428 16:31:42.260201    4440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0428 16:31:42.270277    4440 command_runner.go:130] > Certificate will not expire
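	Each -checkend 86400 call asks openssl whether the certificate expires within the next 24 hours (86400 seconds); a nonzero exit would force regeneration instead of reuse. The same sweep over every control-plane certificate (sketch; the glob is assumed):
	# Flag any control-plane certificate expiring within 24h (sketch).
	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  openssl x509 -noout -checkend 86400 -in "$c" >/dev/null || echo "expiring soon: $c"
	done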
	I0428 16:31:42.270912    4440 kubeadm.go:391] StartCluster: {Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:31:42.284439    4440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 16:31:42.325755    4440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 16:31:42.346114    4440 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0428 16:31:42.346209    4440 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0428 16:31:42.346209    4440 command_runner.go:130] > /var/lib/minikube/etcd:
	I0428 16:31:42.346209    4440 command_runner.go:130] > member
	W0428 16:31:42.346313    4440 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0428 16:31:42.346313    4440 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0428 16:31:42.346420    4440 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0428 16:31:42.360236    4440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0428 16:31:42.387644    4440 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0428 16:31:42.388233    4440 kubeconfig.go:125] found "functional-285400" server: "https://172.27.228.231:8441"
	I0428 16:31:42.389278    4440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:31:42.390258    4440 kapi.go:59] client config for functional-285400: &rest.Config{Host:"https://172.27.228.231:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-285400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-285400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 16:31:42.391246    4440 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 16:31:42.404926    4440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0428 16:31:42.430961    4440 kubeadm.go:624] The running cluster does not require reconfiguration: 172.27.228.231
	I0428 16:31:42.431140    4440 kubeadm.go:1154] stopping kube-system containers ...
	I0428 16:31:42.441632    4440 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 16:31:42.552654    4440 command_runner.go:130] > d944ce960b21
	I0428 16:31:42.552654    4440 command_runner.go:130] > 0a13487c372a
	I0428 16:31:42.552654    4440 command_runner.go:130] > 433fcffb54c9
	I0428 16:31:42.552654    4440 command_runner.go:130] > 9d14cad0dcbb
	I0428 16:31:42.552654    4440 command_runner.go:130] > 9d061e1398da
	I0428 16:31:42.552654    4440 command_runner.go:130] > b37acf5d4707
	I0428 16:31:42.552654    4440 command_runner.go:130] > d57ac6a87327
	I0428 16:31:42.552654    4440 command_runner.go:130] > cd5d493f46dd
	I0428 16:31:42.552654    4440 command_runner.go:130] > d09c631e65fb
	I0428 16:31:42.552654    4440 command_runner.go:130] > 20f0bf4e0821
	I0428 16:31:42.552654    4440 command_runner.go:130] > 7ac449c20d9c
	I0428 16:31:42.552654    4440 command_runner.go:130] > 8f29a8fbd5b2
	I0428 16:31:42.552654    4440 command_runner.go:130] > 36a11974a0fd
	I0428 16:31:42.552654    4440 command_runner.go:130] > cbf5b97235b0
	I0428 16:31:42.552654    4440 command_runner.go:130] > d60d61f62904
	I0428 16:31:42.552654    4440 command_runner.go:130] > 917e469fc278
	I0428 16:31:42.552654    4440 command_runner.go:130] > 3291d76a665c
	I0428 16:31:42.552654    4440 command_runner.go:130] > 76cb8f18544b
	I0428 16:31:42.552654    4440 command_runner.go:130] > e945fb6ccd0b
	I0428 16:31:42.552654    4440 command_runner.go:130] > 393441639d88
	I0428 16:31:42.552654    4440 command_runner.go:130] > 7c1efde2e1d0
	I0428 16:31:42.552654    4440 command_runner.go:130] > 86ed10ca148a
	I0428 16:31:42.552654    4440 command_runner.go:130] > 4142c8b3542b
	I0428 16:31:42.552654    4440 command_runner.go:130] > d4f34492bd3b
	I0428 16:31:42.552654    4440 command_runner.go:130] > 0df4de5342ba
	I0428 16:31:42.552654    4440 docker.go:483] Stopping containers: [d944ce960b21 0a13487c372a 433fcffb54c9 9d14cad0dcbb 9d061e1398da b37acf5d4707 d57ac6a87327 cd5d493f46dd d09c631e65fb 20f0bf4e0821 7ac449c20d9c 8f29a8fbd5b2 36a11974a0fd cbf5b97235b0 d60d61f62904 917e469fc278 3291d76a665c 76cb8f18544b e945fb6ccd0b 393441639d88 7c1efde2e1d0 86ed10ca148a 4142c8b3542b d4f34492bd3b 0df4de5342ba]
	I0428 16:31:42.564566    4440 ssh_runner.go:195] Run: docker stop d944ce960b21 0a13487c372a 433fcffb54c9 9d14cad0dcbb 9d061e1398da b37acf5d4707 d57ac6a87327 cd5d493f46dd d09c631e65fb 20f0bf4e0821 7ac449c20d9c 8f29a8fbd5b2 36a11974a0fd cbf5b97235b0 d60d61f62904 917e469fc278 3291d76a665c 76cb8f18544b e945fb6ccd0b 393441639d88 7c1efde2e1d0 86ed10ca148a 4142c8b3542b d4f34492bd3b 0df4de5342ba
	I0428 16:31:44.026082    4440 command_runner.go:130] > d944ce960b21
	I0428 16:31:44.026082    4440 command_runner.go:130] > 0a13487c372a
	I0428 16:31:44.026082    4440 command_runner.go:130] > 433fcffb54c9
	I0428 16:31:44.026082    4440 command_runner.go:130] > 9d14cad0dcbb
	I0428 16:31:44.026082    4440 command_runner.go:130] > 9d061e1398da
	I0428 16:31:44.026082    4440 command_runner.go:130] > b37acf5d4707
	I0428 16:31:44.026082    4440 command_runner.go:130] > d57ac6a87327
	I0428 16:31:44.026082    4440 command_runner.go:130] > cd5d493f46dd
	I0428 16:31:44.026082    4440 command_runner.go:130] > d09c631e65fb
	I0428 16:31:44.026082    4440 command_runner.go:130] > 20f0bf4e0821
	I0428 16:31:44.026082    4440 command_runner.go:130] > 7ac449c20d9c
	I0428 16:31:44.026082    4440 command_runner.go:130] > 8f29a8fbd5b2
	I0428 16:31:44.026082    4440 command_runner.go:130] > 36a11974a0fd
	I0428 16:31:44.026082    4440 command_runner.go:130] > cbf5b97235b0
	I0428 16:31:44.026082    4440 command_runner.go:130] > d60d61f62904
	I0428 16:31:44.026082    4440 command_runner.go:130] > 917e469fc278
	I0428 16:31:44.026082    4440 command_runner.go:130] > 3291d76a665c
	I0428 16:31:44.026082    4440 command_runner.go:130] > 76cb8f18544b
	I0428 16:31:44.026082    4440 command_runner.go:130] > e945fb6ccd0b
	I0428 16:31:44.026082    4440 command_runner.go:130] > 393441639d88
	I0428 16:31:44.026082    4440 command_runner.go:130] > 7c1efde2e1d0
	I0428 16:31:44.026082    4440 command_runner.go:130] > 86ed10ca148a
	I0428 16:31:44.026082    4440 command_runner.go:130] > 4142c8b3542b
	I0428 16:31:44.026082    4440 command_runner.go:130] > d4f34492bd3b
	I0428 16:31:44.026082    4440 command_runner.go:130] > 0df4de5342ba
	I0428 16:31:44.026082    4440 ssh_runner.go:235] Completed: docker stop d944ce960b21 0a13487c372a 433fcffb54c9 9d14cad0dcbb 9d061e1398da b37acf5d4707 d57ac6a87327 cd5d493f46dd d09c631e65fb 20f0bf4e0821 7ac449c20d9c 8f29a8fbd5b2 36a11974a0fd cbf5b97235b0 d60d61f62904 917e469fc278 3291d76a665c 76cb8f18544b e945fb6ccd0b 393441639d88 7c1efde2e1d0 86ed10ca148a 4142c8b3542b d4f34492bd3b 0df4de5342ba: (1.4615132s)
	I0428 16:31:44.040030    4440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0428 16:31:44.129004    4440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 16:31:44.146875    4440 command_runner.go:130] > -rw------- 1 root root 5651 Apr 28 23:29 /etc/kubernetes/admin.conf
	I0428 16:31:44.146875    4440 command_runner.go:130] > -rw------- 1 root root 5658 Apr 28 23:29 /etc/kubernetes/controller-manager.conf
	I0428 16:31:44.146875    4440 command_runner.go:130] > -rw------- 1 root root 2007 Apr 28 23:29 /etc/kubernetes/kubelet.conf
	I0428 16:31:44.146875    4440 command_runner.go:130] > -rw------- 1 root root 5606 Apr 28 23:29 /etc/kubernetes/scheduler.conf
	I0428 16:31:44.146875    4440 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Apr 28 23:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Apr 28 23:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Apr 28 23:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Apr 28 23:29 /etc/kubernetes/scheduler.conf
	
	I0428 16:31:44.159058    4440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0428 16:31:44.176921    4440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0428 16:31:44.187909    4440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0428 16:31:44.203802    4440 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0428 16:31:44.216136    4440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0428 16:31:44.233104    4440 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0428 16:31:44.244146    4440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 16:31:44.276725    4440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0428 16:31:44.295985    4440 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0428 16:31:44.308186    4440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 16:31:44.338966    4440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 16:31:44.356232    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 16:31:44.436067    4440 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 16:31:44.436067    4440 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0428 16:31:44.436067    4440 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0428 16:31:44.436067    4440 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0428 16:31:44.436229    4440 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0428 16:31:44.436229    4440 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0428 16:31:44.436229    4440 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0428 16:31:44.436229    4440 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0428 16:31:44.436229    4440 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0428 16:31:44.436331    4440 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0428 16:31:44.436331    4440 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0428 16:31:44.436396    4440 command_runner.go:130] > [certs] Using the existing "sa" key
	I0428 16:31:44.436425    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 16:31:45.265132    4440 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 16:31:45.265236    4440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0428 16:31:45.265280    4440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0428 16:31:45.265280    4440 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0428 16:31:45.265312    4440 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 16:31:45.265312    4440 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 16:31:45.265462    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0428 16:31:45.603496    4440 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 16:31:45.603496    4440 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 16:31:45.603496    4440 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0428 16:31:45.603496    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 16:31:45.709866    4440 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 16:31:45.709906    4440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 16:31:45.709906    4440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 16:31:45.709906    4440 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 16:31:45.710086    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0428 16:31:45.816247    4440 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 16:31:45.816403    4440 api_server.go:52] waiting for apiserver process to appear ...
	I0428 16:31:45.829089    4440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 16:31:46.345207    4440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 16:31:46.835082    4440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 16:31:47.340662    4440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 16:31:47.409419    4440 command_runner.go:130] > 5745
	I0428 16:31:47.410249    4440 api_server.go:72] duration metric: took 1.5939109s to wait for apiserver process to appear ...
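	The roughly 500ms cadence of the pgrep calls above is the process-appearance wait that this duration metric summarizes; an equivalent loop (sketch):
	# Wait until the kube-apiserver process exists, then report its PID (sketch).
	until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do sleep 0.5; done
	echo "apiserver pid: $pid"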
	I0428 16:31:47.410332    4440 api_server.go:88] waiting for apiserver healthz status ...
	I0428 16:31:47.410396    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:31:50.659645    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0428 16:31:50.659967    4440 api_server.go:103] status: https://172.27.228.231:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0428 16:31:50.660011    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:31:50.670549    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0428 16:31:50.670856    4440 api_server.go:103] status: https://172.27.228.231:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0428 16:31:50.924745    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:31:50.935423    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 16:31:50.935423    4440 api_server.go:103] status: https://172.27.228.231:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 16:31:51.416938    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:31:51.424314    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 16:31:51.424314    4440 api_server.go:103] status: https://172.27.228.231:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 16:31:51.922352    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:31:51.930135    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 16:31:51.930135    4440 api_server.go:103] status: https://172.27.228.231:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 16:31:52.421945    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:31:52.429573    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 200:
	ok
	I0428 16:31:52.429970    4440 round_trippers.go:463] GET https://172.27.228.231:8441/version
	I0428 16:31:52.429970    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:52.429970    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:52.429970    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:52.445743    4440 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0428 16:31:52.446683    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:52.446683    4440 round_trippers.go:580]     Audit-Id: a71f3cfe-e030-44d4-8bb4-24e9fd954f4d
	I0428 16:31:52.446683    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:52.446683    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:52.446683    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:52.446683    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:52.446683    4440 round_trippers.go:580]     Content-Length: 263
	I0428 16:31:52.446683    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:52 GMT
	I0428 16:31:52.446803    4440 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 16:31:52.447009    4440 api_server.go:141] control plane version: v1.30.0
	I0428 16:31:52.447009    4440 api_server.go:131] duration metric: took 5.0366695s to wait for apiserver health ...
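	The 403 → 500 → 200 progression above is the normal apiserver bring-up sequence: anonymous /healthz probes are Forbidden until the RBAC bootstrap roles exist, then the verbose healthz report marks unfinished poststarthooks with [-] until each completes, and the wait ends on a plain 200 "ok". Below is a minimal Go sketch of the same tolerant polling loop; the endpoint and the ~500ms cadence mirror the log, while the function name and timeout value are illustrative rather than minikube's actual api_server.go helpers.

// Sketch only: poll /healthz, treating 403 (RBAC bootstrap pending) and
// 500 (poststarthooks still failing) as "keep waiting"; stop on 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert during bring-up, so the
		// probe skips verification (an assumption matching typical health checks).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthz check passed
			}
			// 403 and 500 are expected while bootstrap hooks run; retry.
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver %s never became healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://172.27.228.231:8441/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}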
	I0428 16:31:52.447009    4440 cni.go:84] Creating CNI manager for ""
	I0428 16:31:52.447009    4440 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:31:52.450049    4440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0428 16:31:52.465338    4440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0428 16:31:52.492670    4440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
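	The log records only the size of the CNI payload (496 bytes), not its contents. The sketch below writes a representative bridge conflist of the kind the "scp memory" step installs; the JSON is an assumption modeled on the standard CNI bridge and portmap plugins, not the exact bytes minikube shipped.

// Sketch only: install a bridge CNI conflist at the path seen in the log.
package main

import "os"

// bridgeConflist is an assumed, representative payload per the CNI spec.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Equivalent to the scp step above (run inside the guest, as root).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}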
	I0428 16:31:52.538958    4440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 16:31:52.538958    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:31:52.538958    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:52.538958    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:52.538958    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:52.547752    4440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 16:31:52.547752    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:52.547752    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:52.547752    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:52 GMT
	I0428 16:31:52.547832    4440 round_trippers.go:580]     Audit-Id: 488c5e64-c64a-41e9-a0cc-d28d460427d7
	I0428 16:31:52.547832    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:52.548002    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:52.548002    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:52.549691    4440 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"549","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52292 chars]
	I0428 16:31:52.554592    4440 system_pods.go:59] 7 kube-system pods found
	I0428 16:31:52.555555    4440 system_pods.go:61] "coredns-7db6d8ff4d-w4tmj" [6373f137-e7ed-49b8-91bb-fb26c74db65e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0428 16:31:52.555555    4440 system_pods.go:61] "etcd-functional-285400" [c13e206d-870c-452b-9505-0ea9d8fda928] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0428 16:31:52.555555    4440 system_pods.go:61] "kube-apiserver-functional-285400" [324ce332-5282-441c-8e35-d056cd19c5d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0428 16:31:52.555555    4440 system_pods.go:61] "kube-controller-manager-functional-285400" [ba66ef53-0af6-4e90-8992-67b81a6352f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0428 16:31:52.555555    4440 system_pods.go:61] "kube-proxy-cmcmh" [d6b1cdcd-2edc-4615-bc60-36b8c54196f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0428 16:31:52.555555    4440 system_pods.go:61] "kube-scheduler-functional-285400" [9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0428 16:31:52.555555    4440 system_pods.go:61] "storage-provisioner" [eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0428 16:31:52.555555    4440 system_pods.go:74] duration metric: took 16.5965ms to wait for pod list to return data ...
	I0428 16:31:52.555555    4440 node_conditions.go:102] verifying NodePressure condition ...
	I0428 16:31:52.555555    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes
	I0428 16:31:52.555555    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:52.555555    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:52.555555    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:52.559556    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:52.559638    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:52.559638    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:52.559638    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:52 GMT
	I0428 16:31:52.559638    4440 round_trippers.go:580]     Audit-Id: 0fe54ab4-e036-4824-97cf-15d8ad72d420
	I0428 16:31:52.559638    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:52.559638    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:52.559638    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:52.559873    4440 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0428 16:31:52.560515    4440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 16:31:52.560515    4440 node_conditions.go:123] node cpu capacity is 2
	I0428 16:31:52.560515    4440 node_conditions.go:105] duration metric: took 4.9607ms to run NodePressure ...
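	The NodePressure verification above boils down to listing nodes, reading their reported capacity, and confirming no pressure condition is set. A sketch of the same check with client-go follows; the kubeconfig handling and output format are assumptions.

// Sketch only: list nodes and report capacity plus any pressure conditions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These correspond to the "ephemeral capacity" and "cpu capacity" log lines.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// NodePressure passes when MemoryPressure and DiskPressure are not True.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("%s reports pressure condition %s\n", n.Name, c.Type)
			}
		}
	}
}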
	I0428 16:31:52.560515    4440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 16:31:53.048611    4440 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0428 16:31:53.048611    4440 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0428 16:31:53.048611    4440 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0428 16:31:53.048611    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0428 16:31:53.048611    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:53.048611    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:53.048611    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:53.053006    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:53.053006    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:53.053006    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:53.053006    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:53.053006    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:53.053006    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:53.053006    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:53 GMT
	I0428 16:31:53.053006    4440 round_trippers.go:580]     Audit-Id: 38e888ab-6ab7-40c1-b02d-fdcd1d2f7399
	I0428 16:31:53.054415    4440 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"601"},"items":[{"metadata":{"name":"etcd-functional-285400","namespace":"kube-system","uid":"c13e206d-870c-452b-9505-0ea9d8fda928","resourceVersion":"561","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.228.231:2379","kubernetes.io/config.hash":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.mirror":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.seen":"2024-04-28T23:29:12.420375049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31486 chars]
	I0428 16:31:53.056275    4440 kubeadm.go:733] kubelet initialised
	I0428 16:31:53.056275    4440 kubeadm.go:734] duration metric: took 7.6639ms waiting for restarted kubelet to initialise ...
	I0428 16:31:53.056275    4440 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 16:31:53.056275    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:31:53.056275    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:53.056275    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:53.056275    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:53.067484    4440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 16:31:53.067484    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:53.067484    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:53.067484    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:53.067484    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:53 GMT
	I0428 16:31:53.067484    4440 round_trippers.go:580]     Audit-Id: 9d049bed-5bce-4099-be62-f48ce2d38f8b
	I0428 16:31:53.067484    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:53.067484    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:53.068821    4440 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"601"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"549","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52292 chars]
	I0428 16:31:53.071157    4440 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace to be "Ready" ...
	I0428 16:31:53.071763    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:53.071763    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:53.071763    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:53.071763    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:53.076390    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:53.076846    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:53.076846    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:53.076846    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:53.076846    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:53.076846    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:53 GMT
	I0428 16:31:53.076846    4440 round_trippers.go:580]     Audit-Id: 116a425d-5222-4a40-bd48-30bb606791e8
	I0428 16:31:53.076846    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:53.077081    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"549","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0428 16:31:53.077810    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:53.077898    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:53.077898    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:53.077898    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:53.087660    4440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0428 16:31:53.087660    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:53.087660    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:53.087660    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:53 GMT
	I0428 16:31:53.087660    4440 round_trippers.go:580]     Audit-Id: 14d6e68f-72dd-4e56-a1e2-2d6b89adc311
	I0428 16:31:53.087660    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:53.087660    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:53.087660    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:53.087660    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:53.574259    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:53.574381    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:53.574381    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:53.574381    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:53.583743    4440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 16:31:53.583743    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:53.583743    4440 round_trippers.go:580]     Audit-Id: 9a999aea-4bf9-4534-ac83-7642dfc266fa
	I0428 16:31:53.583743    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:53.583743    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:53.583743    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:53.583743    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:53.583743    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:53 GMT
	I0428 16:31:53.584314    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"549","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0428 16:31:53.585945    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:53.586118    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:53.586118    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:53.586118    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:53.591003    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:53.591800    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:53.591800    4440 round_trippers.go:580]     Audit-Id: 752d2bab-5c8f-4b61-8daf-d71119164184
	I0428 16:31:53.591800    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:53.591800    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:53.591800    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:53.591800    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:53.591800    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:53 GMT
	I0428 16:31:53.592304    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:54.080181    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:54.080181    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:54.080181    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:54.080181    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:54.085746    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:31:54.086009    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:54.086009    4440 round_trippers.go:580]     Audit-Id: d0994abb-0c29-4e14-8d17-1b05f4b3e957
	I0428 16:31:54.086009    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:54.086009    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:54.086009    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:54.086009    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:54.086009    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:54 GMT
	I0428 16:31:54.086298    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"549","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0428 16:31:54.087124    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:54.087124    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:54.087124    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:54.087124    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:54.092713    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:31:54.092713    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:54.092713    4440 round_trippers.go:580]     Audit-Id: 0ccd375e-e1e2-4200-a9be-c7ac3946517c
	I0428 16:31:54.092713    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:54.093173    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:54.093173    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:54.093173    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:54.093173    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:54 GMT
	I0428 16:31:54.093308    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:54.579815    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:54.579815    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:54.579815    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:54.579815    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:54.583407    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:54.584425    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:54.584425    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:54.584474    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:54.584474    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:54.584474    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:54 GMT
	I0428 16:31:54.584474    4440 round_trippers.go:580]     Audit-Id: 50b26bb6-33e5-4711-a0cd-7c6985bd30e5
	I0428 16:31:54.584474    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:54.584611    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"549","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0428 16:31:54.585424    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:54.585424    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:54.585424    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:54.585424    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:54.587712    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:54.587712    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:54.588586    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:54 GMT
	I0428 16:31:54.588586    4440 round_trippers.go:580]     Audit-Id: 74ee660a-c4f1-4a14-9e68-1167ed9ac8c6
	I0428 16:31:54.588586    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:54.588586    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:54.588586    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:54.588586    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:54.588847    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:55.086618    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:55.086618    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:55.086618    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:55.086618    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:55.091240    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:55.091240    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:55.091452    4440 round_trippers.go:580]     Audit-Id: 79fc7b06-6c0e-45c3-8880-fb775827e9d0
	I0428 16:31:55.091452    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:55.091452    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:55.091452    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:55.091452    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:55.091452    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:55 GMT
	I0428 16:31:55.091651    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"613","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0428 16:31:55.092648    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:55.092648    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:55.092648    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:55.092648    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:55.096237    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:55.096726    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:55.096726    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:55.096726    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:55.096726    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:55.096726    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:55.096726    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:55 GMT
	I0428 16:31:55.096726    4440 round_trippers.go:580]     Audit-Id: 9da5489a-48df-4f27-b05f-b91ea389c33d
	I0428 16:31:55.097320    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:55.097930    4440 pod_ready.go:102] pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace has status "Ready":"False"
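	Each polling round above is one GET of the pod (to inspect its Ready condition) plus one GET of its node, repeated on a ~500ms cadence until the condition flips to True or the 4m0s budget runs out. Below is a condensed client-go sketch of that wait; the helper name and kubeconfig handling are assumptions, while the namespace, pod name, and timeout come from the log.

// Sketch only: poll a pod until its PodReady condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // the pod reports Ready=True
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the polling cadence in the log
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-w4tmj", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}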
	I0428 16:31:55.582625    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:55.582625    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:55.582953    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:55.582953    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:55.587255    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:55.587608    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:55.587608    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:55.587608    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:55.587608    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:55.587608    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:55.587608    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:55 GMT
	I0428 16:31:55.587608    4440 round_trippers.go:580]     Audit-Id: baff7545-1586-40ba-b171-5d23cb1723d8
	I0428 16:31:55.588097    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"613","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0428 16:31:55.588461    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:55.588461    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:55.588461    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:55.588461    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:55.592199    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:55.592353    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:55.592353    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:55.592353    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:55.592353    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:55 GMT
	I0428 16:31:55.592353    4440 round_trippers.go:580]     Audit-Id: 454fed0e-e70c-443d-809a-7f79b3976189
	I0428 16:31:55.592353    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:55.592353    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:55.593103    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:56.073566    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:56.073638    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:56.073638    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:56.073701    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:56.078397    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:56.079405    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:56.079405    4440 round_trippers.go:580]     Audit-Id: ad9e270c-5ef7-47e1-bd39-351e3895f287
	I0428 16:31:56.079405    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:56.079405    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:56.079405    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:56.079405    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:56.079405    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:56 GMT
	I0428 16:31:56.079405    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"613","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0428 16:31:56.079405    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:56.080445    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:56.080445    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:56.080445    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:56.083767    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:56.083767    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:56.083767    4440 round_trippers.go:580]     Audit-Id: 056fbef6-9211-475e-bd75-f68c389af92b
	I0428 16:31:56.083767    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:56.083767    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:56.083767    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:56.083767    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:56.084097    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:56 GMT
	I0428 16:31:56.084358    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:56.572462    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:56.572462    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:56.572540    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:56.572540    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:56.576844    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:56.577457    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:56.577457    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:56.577543    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:56.577543    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:56.577543    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:56.577543    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:56 GMT
	I0428 16:31:56.577543    4440 round_trippers.go:580]     Audit-Id: 7f10b86b-07ca-4deb-8da9-3630ed6fb876
	I0428 16:31:56.577841    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"613","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0428 16:31:56.578735    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:56.578786    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:56.578786    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:56.578786    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:56.581380    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:56.581380    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:56.581380    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:56.581380    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:56.581380    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:56.581380    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:56.581380    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:56 GMT
	I0428 16:31:56.581692    4440 round_trippers.go:580]     Audit-Id: 246171f0-d250-4f4b-a82f-141be3b163ff
	I0428 16:31:56.582048    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:57.087295    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:57.087295    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:57.087295    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:57.087295    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:57.093891    4440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 16:31:57.093891    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:57.093891    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:57.093891    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:57.093891    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:57 GMT
	I0428 16:31:57.093891    4440 round_trippers.go:580]     Audit-Id: a0e4c42b-4b4e-45a6-a366-2e804374d3b8
	I0428 16:31:57.093891    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:57.093891    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:57.094805    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"613","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0428 16:31:57.095636    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:57.095694    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:57.095694    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:57.095694    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:57.097908    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:57.097908    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:57.097908    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:57.097908    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:57.097908    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:57 GMT
	I0428 16:31:57.097908    4440 round_trippers.go:580]     Audit-Id: f319d3a7-585c-4ab7-ba46-2588acd55c88
	I0428 16:31:57.097908    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:57.097908    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:57.099201    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:57.099495    4440 pod_ready.go:102] pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace has status "Ready":"False"
	I0428 16:31:57.571858    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:31:57.572087    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:57.572087    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:57.572087    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:57.577497    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:57.577544    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:57.577544    4440 round_trippers.go:580]     Audit-Id: d0fc92e5-88d5-4639-9273-d94a30aa711a
	I0428 16:31:57.577626    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:57.577626    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:57.577654    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:57.577654    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:57.577654    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:57 GMT
	I0428 16:31:57.577756    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"615","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0428 16:31:57.578687    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:57.578768    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:57.578768    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:57.578768    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:57.580988    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:57.581642    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:57.581642    4440 round_trippers.go:580]     Audit-Id: 44b7ecac-bf69-416a-9982-2d4397956c04
	I0428 16:31:57.581642    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:57.581642    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:57.581642    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:57.581642    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:57.581642    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:57 GMT
	I0428 16:31:57.582004    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:57.582099    4440 pod_ready.go:92] pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace has status "Ready":"True"
	I0428 16:31:57.582099    4440 pod_ready.go:81] duration metric: took 4.5109352s for pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace to be "Ready" ...
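	The pod_ready.go lines above trace minikube's readiness gate for the coredns pod: the client re-fetches the pod roughly every 500 ms (the :102 lines record a still-False Ready condition, the :92 line records the final True), then reports the wall-clock span as a duration metric. A minimal client-go sketch of that poll-until-Ready pattern follows; it assumes a standard kubernetes.Interface client, and names like waitPodReady are illustrative, not minikube's own helpers.

	    // Hypothetical sketch: poll a pod until its Ready condition is True,
	    // mirroring the ~500 ms re-fetch cadence visible in the log above.
	    package waitutil

	    import (
	    	"context"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // isPodReady reports whether the pod's Ready condition is True.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    // waitPodReady re-fetches the pod every 500 ms until it is Ready or
	    // the per-pod timeout (4m0s in the log above) expires.
	    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	    		func(ctx context.Context) (bool, error) {
	    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // transient API errors: keep polling
	    			}
	    			return isPodReady(pod), nil
	    		})
	    }

	The 4.5109352s reported above is simply the elapsed time of this loop for the coredns pod; the same loop now restarts for etcd.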
	I0428 16:31:57.582099    4440 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:31:57.582099    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/etcd-functional-285400
	I0428 16:31:57.582099    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:57.582099    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:57.582099    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:57.584826    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:57.584826    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:57.585189    4440 round_trippers.go:580]     Audit-Id: 1753ecef-e351-4dcd-af18-8f9e8dcd9ed8
	I0428 16:31:57.585189    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:57.585264    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:57.585264    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:57.585264    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:57.585294    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:57 GMT
	I0428 16:31:57.585633    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-285400","namespace":"kube-system","uid":"c13e206d-870c-452b-9505-0ea9d8fda928","resourceVersion":"561","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.228.231:2379","kubernetes.io/config.hash":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.mirror":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.seen":"2024-04-28T23:29:12.420375049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0428 16:31:57.585865    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:57.585865    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:57.585865    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:57.585865    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:57.588682    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:57.588682    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:57.588682    4440 round_trippers.go:580]     Audit-Id: 901ca1b7-ce4f-401d-9881-377776e8b25a
	I0428 16:31:57.588682    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:57.589148    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:57.589148    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:57.589148    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:57.589148    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:57 GMT
	I0428 16:31:57.589338    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:58.086547    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/etcd-functional-285400
	I0428 16:31:58.086618    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:58.086618    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:58.086618    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:58.090236    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:58.090577    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:58.090577    4440 round_trippers.go:580]     Audit-Id: 065d2075-a0c9-40d4-a624-f590f51d0342
	I0428 16:31:58.090577    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:58.090717    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:58.090717    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:58.090717    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:58.090717    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:58 GMT
	I0428 16:31:58.090937    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-285400","namespace":"kube-system","uid":"c13e206d-870c-452b-9505-0ea9d8fda928","resourceVersion":"619","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.228.231:2379","kubernetes.io/config.hash":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.mirror":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.seen":"2024-04-28T23:29:12.420375049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6380 chars]
	I0428 16:31:58.091207    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:58.091207    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:58.091207    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:58.091207    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:58.095230    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:58.095230    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:58.095230    4440 round_trippers.go:580]     Audit-Id: fed3b541-6e3a-496c-bb5e-b30964bea28e
	I0428 16:31:58.095230    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:58.095230    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:58.095230    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:58.095230    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:58.095230    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:58 GMT
	I0428 16:31:58.095817    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:58.096193    4440 pod_ready.go:92] pod "etcd-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:31:58.096193    4440 pod_ready.go:81] duration metric: took 514.0935ms for pod "etcd-functional-285400" in "kube-system" namespace to be "Ready" ...
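	Each poll iteration above expands into a full round_trippers block (request URL, request headers, response status, response headers, truncated body) because client-go's HTTP tracing is gated on klog verbosity; the "[truncated N chars]" markers suggest the harness runs at a moderate -v level, since bodies are printed in full only at the highest verbosities. A minimal sketch of enabling that tracing in a Go program that uses client-go is below; the specific level 8 is an assumption about where body logging begins, and the snippet only raises the level, so output appears once client-go actually makes requests.

	    // Hypothetical sketch: raise klog verbosity so client-go's
	    // round_trippers tracing (the request/response blocks in this log)
	    // is emitted when requests are made.
	    package main

	    import (
	    	"flag"

	    	"k8s.io/klog/v2"
	    )

	    func main() {
	    	klog.InitFlags(nil)                     // registers -v, -alsologtostderr, ...
	    	_ = flag.Set("v", "8")                  // assumed level for truncated bodies
	    	_ = flag.Set("alsologtostderr", "true") // mirror logs to stderr
	    	flag.Parse()
	    	klog.V(8).Info("client-go HTTP bodies will now be traced")
	    	defer klog.Flush()
	    }

	minikube exposes the same klog flags on its CLI, which appears to be how this report ends up containing the request and response bodies.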
	I0428 16:31:58.096193    4440 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:31:58.096193    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:31:58.096193    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:58.096193    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:58.096193    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:58.098789    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:58.098789    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:58.098789    4440 round_trippers.go:580]     Audit-Id: 593852e6-0434-40f1-8553-4b1c98fe8e69
	I0428 16:31:58.099670    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:58.099670    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:58.099670    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:58.099670    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:58.099670    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:58 GMT
	I0428 16:31:58.100133    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:31:58.100843    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:58.100904    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:58.100904    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:58.100904    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:58.103759    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:31:58.103759    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:58.103759    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:58.103759    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:58.103759    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:58.103759    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:58 GMT
	I0428 16:31:58.103759    4440 round_trippers.go:580]     Audit-Id: 26074bcd-c60c-4181-ab91-1a6b93c6cbbc
	I0428 16:31:58.103759    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:58.103759    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:58.599471    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:31:58.599739    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:58.599739    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:58.599739    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:58.604183    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:58.604244    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:58.604244    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:58.604244    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:58.604244    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:58.604244    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:58 GMT
	I0428 16:31:58.604244    4440 round_trippers.go:580]     Audit-Id: b036d087-1af2-4154-8a31-75f4df65ba21
	I0428 16:31:58.604244    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:58.606804    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:31:58.607661    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:58.607661    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:58.607661    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:58.607661    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:58.612509    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:31:58.612509    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:58.612509    4440 round_trippers.go:580]     Audit-Id: 5953729d-26b2-41f8-b64d-85ee7129f887
	I0428 16:31:58.612509    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:58.612509    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:58.612509    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:58.612509    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:58.612509    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:58 GMT
	I0428 16:31:58.612509    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:59.111391    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:31:59.111391    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:59.111391    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:59.111391    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:59.115132    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:59.116203    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:59.116203    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:59.116253    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:59 GMT
	I0428 16:31:59.116253    4440 round_trippers.go:580]     Audit-Id: 3b832c5e-be06-424c-9dee-60b9635247e8
	I0428 16:31:59.116291    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:59.116291    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:59.116291    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:59.116852    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:31:59.117800    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:59.117853    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:59.117853    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:59.117853    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:59.121439    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:59.121439    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:59.121439    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:59.121439    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:59.121439    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:59.121439    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:59.121439    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:59 GMT
	I0428 16:31:59.121439    4440 round_trippers.go:580]     Audit-Id: 68190de5-9487-495f-b8ba-4934a8e39a6b
	I0428 16:31:59.121439    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:31:59.611087    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:31:59.611189    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:59.611189    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:59.611189    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:59.615047    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:59.615511    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:59.615511    4440 round_trippers.go:580]     Audit-Id: c27085ac-d3ed-4047-94be-6eafbb820726
	I0428 16:31:59.615511    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:59.615511    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:59.615511    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:59.615511    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:59.615583    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:59 GMT
	I0428 16:31:59.616082    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:31:59.616548    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:31:59.616548    4440 round_trippers.go:469] Request Headers:
	I0428 16:31:59.617106    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:31:59.617106    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:31:59.620269    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:31:59.620269    4440 round_trippers.go:577] Response Headers:
	I0428 16:31:59.620269    4440 round_trippers.go:580]     Audit-Id: fde63d6a-26ac-43ee-a97d-c2798c7fae11
	I0428 16:31:59.620269    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:31:59.620269    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:31:59.620269    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:31:59.620269    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:31:59.620484    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:31:59 GMT
	I0428 16:31:59.620533    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:00.111382    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:32:00.111382    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:00.111449    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:00.111449    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:00.116263    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:00.116263    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:00.116402    4440 round_trippers.go:580]     Audit-Id: d2c5808c-4dd9-4518-b1f1-5fd2f7f0af97
	I0428 16:32:00.116402    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:00.116402    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:00.116402    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:00.116402    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:00.116402    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:00 GMT
	I0428 16:32:00.116765    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:32:00.117915    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:00.117915    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:00.118008    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:00.118008    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:00.121337    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:00.121337    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:00.121836    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:00.121836    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:00 GMT
	I0428 16:32:00.121836    4440 round_trippers.go:580]     Audit-Id: 9356d41b-ad06-4d9b-a014-02d758f80f4f
	I0428 16:32:00.121836    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:00.121836    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:00.121836    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:00.122125    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:00.122592    4440 pod_ready.go:102] pod "kube-apiserver-functional-285400" in "kube-system" namespace has status "Ready":"False"
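	Here the loop has logged a still-False Ready condition for kube-apiserver (pod_ready.go:102) and will keep re-fetching on its ~500 ms cadence. Polling is simple and tolerant of dropped connections; the chattier traffic visible in this log could instead be replaced by a single-pod watch, sketched below as a hypothetical alternative under the same client assumptions as earlier, not as what minikube actually does here.

	    // Hypothetical alternative to fixed-interval polling: watch the one
	    // pod so readiness is observed as soon as the API server reports it.
	    package waitutil

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    func watchPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	    	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
	    		FieldSelector: "metadata.name=" + name, // single-pod watch
	    	})
	    	if err != nil {
	    		return err
	    	}
	    	defer w.Stop()
	    	for ev := range w.ResultChan() {
	    		pod, ok := ev.Object.(*corev1.Pod)
	    		if !ok {
	    			continue
	    		}
	    		for _, c := range pod.Status.Conditions {
	    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    				return nil
	    			}
	    		}
	    	}
	    	return fmt.Errorf("watch closed before %s/%s became Ready", ns, name)
	    }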
	I0428 16:32:00.608905    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:32:00.609181    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:00.609181    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:00.609181    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:00.613175    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:00.613512    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:00.613512    4440 round_trippers.go:580]     Audit-Id: 96447ac1-7065-4c7a-a368-4d057324db13
	I0428 16:32:00.613512    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:00.613512    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:00.613512    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:00.613512    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:00.613614    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:00 GMT
	I0428 16:32:00.613720    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:32:00.614635    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:00.614635    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:00.614635    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:00.614635    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:00.620013    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:00.620013    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:00.620013    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:00.620013    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:00 GMT
	I0428 16:32:00.620013    4440 round_trippers.go:580]     Audit-Id: 1bb31913-319c-4217-8075-a647f1bc63bb
	I0428 16:32:00.620013    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:00.620013    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:00.620013    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:00.620013    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:01.107302    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:32:01.107538    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:01.107538    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:01.107538    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:01.114899    4440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 16:32:01.114899    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:01.114899    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:01.114899    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:01.114899    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:01 GMT
	I0428 16:32:01.114899    4440 round_trippers.go:580]     Audit-Id: 45c34250-a707-483e-8b43-77241f5ac060
	I0428 16:32:01.114899    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:01.114899    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:01.115338    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:32:01.116384    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:01.116384    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:01.116459    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:01.116459    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:01.119076    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:01.119076    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:01.119994    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:01.119994    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:01 GMT
	I0428 16:32:01.119994    4440 round_trippers.go:580]     Audit-Id: 575be437-f7b2-4994-b543-0aa64523e651
	I0428 16:32:01.119994    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:01.119994    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:01.120055    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:01.120157    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:01.609216    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:32:01.609216    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:01.609216    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:01.609216    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:01.615201    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:01.615201    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:01.615201    4440 round_trippers.go:580]     Audit-Id: c90df4e4-92c2-4db0-84cf-6062edf2c236
	I0428 16:32:01.615201    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:01.615201    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:01.615201    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:01.615201    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:01.615201    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:01 GMT
	I0428 16:32:01.616033    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"555","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8408 chars]
	I0428 16:32:01.616758    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:01.616758    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:01.616758    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:01.616758    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:01.619343    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:01.619671    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:01.619671    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:01 GMT
	I0428 16:32:01.619671    4440 round_trippers.go:580]     Audit-Id: f271a4ed-eaf1-4e5d-87e5-500ef7cc4b57
	I0428 16:32:01.619671    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:01.619671    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:01.619671    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:01.619671    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:01.619974    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:02.105950    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:32:02.106165    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.106165    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.106165    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.109723    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:02.110515    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.110515    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.110515    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.110515    4440 round_trippers.go:580]     Audit-Id: 4d1ccfbc-1b14-46d4-9409-c5a7a20d4f37
	I0428 16:32:02.110515    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.110515    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.110515    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.110951    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"623","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8164 chars]
	I0428 16:32:02.111401    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:02.111401    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.111401    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.111401    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.114602    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:02.114602    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.114602    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.114602    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.114602    4440 round_trippers.go:580]     Audit-Id: 7b9e21bc-f9ff-4b2e-934d-a6668d88f01c
	I0428 16:32:02.114602    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.114793    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.114793    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.114963    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:02.115446    4440 pod_ready.go:92] pod "kube-apiserver-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:02.115446    4440 pod_ready.go:81] duration metric: took 4.019248s for pod "kube-apiserver-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:02.115446    4440 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-285400" in "kube-system" namespace to be "Ready" ...
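
Each pod_ready.go:92 verdict above (the "Ready":"True" lines) reduces to fetching the pod and inspecting its PodReady condition. A minimal sketch of that check with client-go follows; the readiness package and isPodReady helper are illustrative names rather than minikube's own, and a configured *kubernetes.Clientset is assumed:

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady fetches the named pod and reports whether its Ready
// condition is True, i.e. the state the log prints as "Ready":"True".
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

The extra GET on /api/v1/nodes/functional-285400 after each pod fetch serves the same wait loop: the node object is retrieved so the checker can also confirm the node hosting the pod is still present and schedulable.
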
	I0428 16:32:02.116011    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-285400
	I0428 16:32:02.116011    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.116157    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.116157    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.120046    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:02.120129    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.120129    4440 round_trippers.go:580]     Audit-Id: 6e3edd17-ca5f-4f3d-8108-fe1eaceb5581
	I0428 16:32:02.120165    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.120191    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.120191    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.120191    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.120191    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.120498    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-285400","namespace":"kube-system","uid":"ba66ef53-0af6-4e90-8992-67b81a6352f3","resourceVersion":"618","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b58dd1d0fd407dad600e27e9ada9e50d","kubernetes.io/config.mirror":"b58dd1d0fd407dad600e27e9ada9e50d","kubernetes.io/config.seen":"2024-04-28T23:29:12.420369448Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7729 chars]
	I0428 16:32:02.121119    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:02.121239    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.121239    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.121239    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.124200    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:02.124200    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.124200    4440 round_trippers.go:580]     Audit-Id: 9ee738f2-5525-407c-907e-315484320887
	I0428 16:32:02.124264    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.124264    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.124264    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.124264    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.124264    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.124451    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:02.124770    4440 pod_ready.go:92] pod "kube-controller-manager-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:02.124770    4440 pod_ready.go:81] duration metric: took 9.3236ms for pod "kube-controller-manager-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:02.124770    4440 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cmcmh" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:02.124770    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-proxy-cmcmh
	I0428 16:32:02.124770    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.124770    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.124770    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.128086    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:02.128155    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.128155    4440 round_trippers.go:580]     Audit-Id: 0b58d670-5c22-4768-b11d-5ddc750cd132
	I0428 16:32:02.128155    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.128155    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.128155    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.128238    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.128238    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.128571    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cmcmh","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b1cdcd-2edc-4615-bc60-36b8c54196f3","resourceVersion":"614","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a7aad6d8-1e92-43d4-8eba-edbaa82c04c3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a7aad6d8-1e92-43d4-8eba-edbaa82c04c3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6266 chars]
	I0428 16:32:02.129037    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:02.129037    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.129037    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.129037    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.134256    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:02.134352    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.134352    4440 round_trippers.go:580]     Audit-Id: 0a629e64-e51f-4f0e-b482-5f44b43b94eb
	I0428 16:32:02.134352    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.134352    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.134352    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.134352    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.134468    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.134751    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:02.135812    4440 pod_ready.go:92] pod "kube-proxy-cmcmh" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:02.135812    4440 pod_ready.go:81] duration metric: took 11.042ms for pod "kube-proxy-cmcmh" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:02.135812    4440 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:02.136073    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:02.136122    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.136146    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.136146    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.152702    4440 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0428 16:32:02.152937    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.152937    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.152937    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.153003    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.153003    4440 round_trippers.go:580]     Audit-Id: e5f23cd0-2276-4407-b147-86b8fcc300b6
	I0428 16:32:02.153003    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.153003    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.153218    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:02.153611    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:02.153611    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.153611    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.153611    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.156260    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:02.156260    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.156260    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.156260    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.156260    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.156260    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.156260    4440 round_trippers.go:580]     Audit-Id: f7cb03c4-6338-4c8e-81f3-955560f14b6e
	I0428 16:32:02.156260    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.156812    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:02.649889    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:02.649889    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.649889    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.649889    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.653485    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:02.654405    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.654405    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.654405    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.654405    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.654405    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.654405    4440 round_trippers.go:580]     Audit-Id: 66d57ecb-6198-49ae-b240-64d008e075c9
	I0428 16:32:02.654405    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.654701    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:02.655292    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:02.655367    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:02.655367    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:02.655367    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:02.658489    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:02.658489    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:02.658489    4440 round_trippers.go:580]     Audit-Id: f605eb76-a85d-4089-ba2b-085282206cbd
	I0428 16:32:02.658489    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:02.658489    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:02.658489    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:02.658489    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:02.658628    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:02 GMT
	I0428 16:32:02.658818    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:03.138219    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:03.138360    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:03.138360    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:03.138360    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:03.142455    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:03.142455    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:03.143304    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:03 GMT
	I0428 16:32:03.143304    4440 round_trippers.go:580]     Audit-Id: 87b3f052-06aa-4408-8a4f-2fc526d8397f
	I0428 16:32:03.143304    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:03.143304    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:03.143304    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:03.143304    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:03.143727    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:03.144463    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:03.144463    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:03.144463    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:03.144533    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:03.147460    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:03.147460    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:03.147460    4440 round_trippers.go:580]     Audit-Id: f0494e85-2526-4a74-9e21-8c608bd21ce5
	I0428 16:32:03.147460    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:03.147460    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:03.147460    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:03.147460    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:03.147460    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:03 GMT
	I0428 16:32:03.147460    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:03.650607    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:03.650607    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:03.650607    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:03.650607    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:03.654265    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:03.654717    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:03.654717    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:03.654717    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:03.654717    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:03 GMT
	I0428 16:32:03.654801    4440 round_trippers.go:580]     Audit-Id: 18886980-0a30-434f-9c44-6e6f990944e4
	I0428 16:32:03.654801    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:03.654801    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:03.654973    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:03.655756    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:03.655826    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:03.655826    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:03.655826    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:03.659023    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:03.659140    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:03.659140    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:03 GMT
	I0428 16:32:03.659140    4440 round_trippers.go:580]     Audit-Id: 01a82762-fbac-418c-96d6-684fc631604d
	I0428 16:32:03.659140    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:03.659249    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:03.659249    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:03.659249    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:03.661041    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:04.137360    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:04.137360    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:04.137360    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:04.137360    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:04.140971    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:04.141090    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:04.141090    4440 round_trippers.go:580]     Audit-Id: 07f54047-c054-47e5-a97f-ee6d6846cfb1
	I0428 16:32:04.141090    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:04.141177    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:04.141177    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:04.141177    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:04.141177    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:04 GMT
	I0428 16:32:04.141242    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:04.141873    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:04.141873    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:04.141873    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:04.141873    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:04.145839    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:04.145933    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:04.145933    4440 round_trippers.go:580]     Audit-Id: 716f89e5-900f-4c44-ae0e-50d1b2582823
	I0428 16:32:04.145933    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:04.145970    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:04.145970    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:04.145970    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:04.145970    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:04 GMT
	I0428 16:32:04.146101    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:04.146893    4440 pod_ready.go:102] pod "kube-scheduler-functional-285400" in "kube-system" namespace has status "Ready":"False"
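
Here the scheduler pod is not yet Ready, so the probe repeats on a fixed interval (the poll timestamps above advance by roughly 500ms) until the condition flips to True or the 4m0s budget that pod_ready.go reports runs out. A sketch of such a retry loop using apimachinery's wait helpers, continuing the illustrative readiness package and reusing the isPodReady helper sketched earlier; minikube's actual loop in pod_ready.go may be structured differently:

package readiness

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady re-checks the pod every 500ms, matching the cadence in the
// log above, and gives up after a 4m0s timeout. The `true` argument makes the
// first check run immediately instead of after the first interval.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return isPodReady(ctx, cs, namespace, name)
		})
}
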
	I0428 16:32:04.639779    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:04.639779    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:04.639779    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:04.639779    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:04.643498    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:04.643498    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:04.643498    4440 round_trippers.go:580]     Audit-Id: 67110b33-8a45-4529-8d6e-9caf7d76738a
	I0428 16:32:04.643498    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:04.643498    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:04.643678    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:04.643678    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:04.643678    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:04 GMT
	I0428 16:32:04.643905    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:04.645807    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:04.645807    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:04.645933    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:04.645933    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:04.649177    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:04.649177    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:04.649177    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:04 GMT
	I0428 16:32:04.649399    4440 round_trippers.go:580]     Audit-Id: a9a7bb4f-9b84-41f3-862b-c08129124976
	I0428 16:32:04.649399    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:04.649399    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:04.649399    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:04.649486    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:04.649586    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:05.139461    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:05.139461    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:05.139461    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:05.139461    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:05.143068    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:05.143785    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:05.143785    4440 round_trippers.go:580]     Audit-Id: 8d1b9283-c52c-414e-8d2a-14d67f4bcccd
	I0428 16:32:05.143785    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:05.143785    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:05.143785    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:05.143785    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:05.143785    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:05 GMT
	I0428 16:32:05.143785    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:05.144796    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:05.144796    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:05.144796    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:05.144796    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:05.149761    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:05.149761    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:05.149761    4440 round_trippers.go:580]     Audit-Id: 176aa728-cd43-44e6-b90d-67d2a3c35d84
	I0428 16:32:05.149761    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:05.149761    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:05.149761    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:05.149761    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:05.149761    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:05 GMT
	I0428 16:32:05.149761    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:05.638423    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:05.638423    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:05.638756    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:05.638756    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:05.641510    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:05.642487    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:05.642487    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:05.642487    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:05.642487    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:05.642487    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:05.642487    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:05 GMT
	I0428 16:32:05.642487    4440 round_trippers.go:580]     Audit-Id: 1d453924-4bf8-4c07-9a73-d201fa459070
	I0428 16:32:05.642657    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:05.643432    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:05.643505    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:05.643505    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:05.643594    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:05.646307    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:05.646965    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:05.646965    4440 round_trippers.go:580]     Audit-Id: 09339377-b71c-4ae7-9520-9f171267cb8f
	I0428 16:32:05.646965    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:05.646965    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:05.646965    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:05.646965    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:05.646965    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:05 GMT
	I0428 16:32:05.647233    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:06.136912    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:06.137018    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:06.137018    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:06.137018    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:06.143435    4440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 16:32:06.143435    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:06.143435    4440 round_trippers.go:580]     Audit-Id: 0b67ba1e-376c-43d4-b223-b48d67c2ec90
	I0428 16:32:06.143435    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:06.143435    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:06.143704    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:06.143704    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:06.143704    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:06 GMT
	I0428 16:32:06.143809    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"541","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5451 chars]
	I0428 16:32:06.145025    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:06.145025    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:06.145143    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:06.145143    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:06.150435    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:06.150435    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:06.150435    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:06 GMT
	I0428 16:32:06.150435    4440 round_trippers.go:580]     Audit-Id: e5729a98-ed70-4b7f-be30-1a49f3ecdb29
	I0428 16:32:06.150435    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:06.150435    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:06.150435    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:06.150435    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:06.151016    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:06.151940    4440 pod_ready.go:102] pod "kube-scheduler-functional-285400" in "kube-system" namespace has status "Ready":"False"
	I0428 16:32:06.639260    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:06.639260    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:06.639260    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:06.639260    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:06.647413    4440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 16:32:06.647413    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:06.647498    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:06.647498    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:06 GMT
	I0428 16:32:06.647498    4440 round_trippers.go:580]     Audit-Id: f761a132-0540-4821-9b40-5458e4f7df13
	I0428 16:32:06.647498    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:06.647498    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:06.647498    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:06.647750    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"631","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0428 16:32:06.648372    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:06.648505    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:06.648505    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:06.648505    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:06.651109    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:06.651109    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:06.651109    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:06.651109    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:06.651109    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:06.651109    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:06 GMT
	I0428 16:32:06.651109    4440 round_trippers.go:580]     Audit-Id: 32a1fad2-198c-46d7-9582-124499dd72ab
	I0428 16:32:06.651109    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:06.651109    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:06.651109    4440 pod_ready.go:92] pod "kube-scheduler-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:06.651109    4440 pod_ready.go:81] duration metric: took 4.5152911s for pod "kube-scheduler-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:06.651109    4440 pod_ready.go:38] duration metric: took 13.5948152s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
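
The block above is minikube's pod readiness wait (pod_ready.go): it alternates GETs of the pod and its node until the pod's PodReady condition reports True, then records the duration. A minimal client-go sketch of the same poll, not minikube's actual implementation; the pod name, namespace, and 6-minute bound are taken from the log above:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-check every 500ms for up to 6 minutes, mirroring the 6m0s wait in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-functional-285400", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                return isPodReady(pod), nil
            })
        fmt.Println("ready:", err == nil)
    }
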
	I0428 16:32:06.651109    4440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 16:32:06.671097    4440 command_runner.go:130] > -16
	I0428 16:32:06.671903    4440 ops.go:34] apiserver oom_adj: -16
	I0428 16:32:06.671903    4440 kubeadm.go:591] duration metric: took 24.3254487s to restartPrimaryControlPlane
	I0428 16:32:06.671903    4440 kubeadm.go:393] duration metric: took 24.4010198s to StartCluster
	I0428 16:32:06.672062    4440 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:32:06.672270    4440 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:32:06.673755    4440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:32:06.675452    4440 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 16:32:06.675532    4440 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 16:32:06.680562    4440 out.go:177] * Verifying Kubernetes components...
	I0428 16:32:06.675638    4440 addons.go:69] Setting storage-provisioner=true in profile "functional-285400"
	I0428 16:32:06.675638    4440 addons.go:69] Setting default-storageclass=true in profile "functional-285400"
	I0428 16:32:06.675777    4440 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:32:06.680875    4440 addons.go:234] Setting addon storage-provisioner=true in "functional-285400"
	W0428 16:32:06.680875    4440 addons.go:243] addon storage-provisioner should already be in state true
	I0428 16:32:06.680875    4440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-285400"
	I0428 16:32:06.684792    4440 host.go:66] Checking if "functional-285400" exists ...
	I0428 16:32:06.684792    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:32:06.686005    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:32:06.699563    4440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:32:07.015293    4440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 16:32:07.044697    4440 node_ready.go:35] waiting up to 6m0s for node "functional-285400" to be "Ready" ...
	I0428 16:32:07.044907    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.044907    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.044907    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.044907    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.050125    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:07.050125    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.050125    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.050125    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.050125    4440 round_trippers.go:580]     Audit-Id: 9638d118-2995-4747-a28d-74291a255b36
	I0428 16:32:07.050125    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.050208    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.050208    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.050558    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:07.051036    4440 node_ready.go:49] node "functional-285400" has status "Ready":"True"
	I0428 16:32:07.051036    4440 node_ready.go:38] duration metric: took 6.3395ms for node "functional-285400" to be "Ready" ...
	I0428 16:32:07.051036    4440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 16:32:07.051036    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:32:07.051036    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.051036    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.051036    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.057657    4440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 16:32:07.057839    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.057986    4440 round_trippers.go:580]     Audit-Id: e6810f91-526a-4ee2-88df-380ba9dfd160
	I0428 16:32:07.058036    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.058036    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.058036    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.058036    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.058036    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.059861    4440 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"615","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50865 chars]
	I0428 16:32:07.066548    4440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.066907    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w4tmj
	I0428 16:32:07.066944    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.066944    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.067045    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.072623    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:07.072623    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.072623    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.072623    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.072623    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.072623    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.072623    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.072623    4440 round_trippers.go:580]     Audit-Id: fb2c98b2-2fed-4eb1-9448-6389e85d88bb
	I0428 16:32:07.074053    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"615","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0428 16:32:07.075050    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.075050    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.075154    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.075154    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.078988    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:07.079826    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.079826    4440 round_trippers.go:580]     Audit-Id: 1644f066-929e-4514-9194-a1d213db67aa
	I0428 16:32:07.079826    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.079826    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.079826    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.079826    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.079826    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.080963    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:07.080963    4440 pod_ready.go:92] pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:07.080963    4440 pod_ready.go:81] duration metric: took 14.284ms for pod "coredns-7db6d8ff4d-w4tmj" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.080963    4440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.081967    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/etcd-functional-285400
	I0428 16:32:07.081967    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.081967    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.082971    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.093653    4440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 16:32:07.093850    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.093850    4440 round_trippers.go:580]     Audit-Id: 1ccedde3-8856-4e3e-aa81-604b1bf34e5c
	I0428 16:32:07.093850    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.093850    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.093850    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.093850    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.093945    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.094803    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-285400","namespace":"kube-system","uid":"c13e206d-870c-452b-9505-0ea9d8fda928","resourceVersion":"619","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.228.231:2379","kubernetes.io/config.hash":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.mirror":"35dedd627fdfea3b9aff90de42393f4a","kubernetes.io/config.seen":"2024-04-28T23:29:12.420375049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/con [truncated 6380 chars]
	I0428 16:32:07.094882    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.094882    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.094882    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.094882    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.099515    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:07.099803    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.099803    4440 round_trippers.go:580]     Audit-Id: 86c65058-c3e4-4c46-8e56-2dcd3fc1ac86
	I0428 16:32:07.099803    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.099803    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.099803    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.099803    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.099941    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.101043    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:07.101451    4440 pod_ready.go:92] pod "etcd-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:07.101451    4440 pod_ready.go:81] duration metric: took 20.4875ms for pod "etcd-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.101451    4440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.102053    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400
	I0428 16:32:07.102116    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.102116    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.102313    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.109974    4440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 16:32:07.109974    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.109974    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.110072    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.110072    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.110072    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.110072    4440 round_trippers.go:580]     Audit-Id: f2d13023-8b57-4742-9b2c-c37f4dfbace8
	I0428 16:32:07.110165    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.111141    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-285400","namespace":"kube-system","uid":"324ce332-5282-441c-8e35-d056cd19c5d5","resourceVersion":"623","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.228.231:8441","kubernetes.io/config.hash":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.mirror":"f291e154417b21ff4db6980bc8535b89","kubernetes.io/config.seen":"2024-04-28T23:29:12.420385849Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8164 chars]
	I0428 16:32:07.112332    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.112332    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.112332    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.112332    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.117110    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:07.117604    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.117604    4440 round_trippers.go:580]     Audit-Id: 636bdaac-d11d-4496-9f44-56d654f23838
	I0428 16:32:07.117604    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.117604    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.117604    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.117604    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.117604    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.117604    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:07.118091    4440 pod_ready.go:92] pod "kube-apiserver-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:07.118091    4440 pod_ready.go:81] duration metric: took 16.6398ms for pod "kube-apiserver-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.118091    4440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.118091    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-285400
	I0428 16:32:07.118091    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.118091    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.118091    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.128114    4440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 16:32:07.128403    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.128403    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.128403    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.128403    4440 round_trippers.go:580]     Audit-Id: 872de64e-1b15-4674-86c6-fafc2a2a94bd
	I0428 16:32:07.128403    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.128403    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.128403    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.129289    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-285400","namespace":"kube-system","uid":"ba66ef53-0af6-4e90-8992-67b81a6352f3","resourceVersion":"618","creationTimestamp":"2024-04-28T23:29:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b58dd1d0fd407dad600e27e9ada9e50d","kubernetes.io/config.mirror":"b58dd1d0fd407dad600e27e9ada9e50d","kubernetes.io/config.seen":"2024-04-28T23:29:12.420369448Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7729 chars]
	I0428 16:32:07.318907    4440 request.go:629] Waited for 188.0315ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.319046    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.319160    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.319160    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.319205    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.322704    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:07.322704    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.322704    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.322704    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.322704    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.322704    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.322704    4440 round_trippers.go:580]     Audit-Id: d508baf2-5c93-4aca-9d6b-a1f13aae1421
	I0428 16:32:07.322704    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.324013    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:07.324374    4440 pod_ready.go:92] pod "kube-controller-manager-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:07.324524    4440 pod_ready.go:81] duration metric: took 206.433ms for pod "kube-controller-manager-functional-285400" in "kube-system" namespace to be "Ready" ...
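
The "Waited for ... due to client-side throttling, not priority and fairness" entries (request.go:629) are produced by client-go's local rate limiter, not by the API server: once a burst of requests exceeds the client's QPS/Burst budget, further calls are delayed before they are even sent. A hedged sketch of raising those limits when building a clientset; client-go's defaults are QPS=5 and Burst=10, and the 50/100 values below are only illustrative:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClientset builds a clientset with a larger client-side rate budget,
    // which shortens the "client-side throttling" waits seen in this log.
    func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5; illustrative value
        cfg.Burst = 100 // default is 10; illustrative value
        return kubernetes.NewForConfig(cfg)
    }
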
	I0428 16:32:07.324524    4440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmcmh" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.507595    4440 request.go:629] Waited for 182.8172ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-proxy-cmcmh
	I0428 16:32:07.507595    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-proxy-cmcmh
	I0428 16:32:07.507595    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.507595    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.507836    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.510950    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:07.511392    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.511556    4440 round_trippers.go:580]     Audit-Id: 1d6d2738-4f4b-4c60-a185-a6a36a547390
	I0428 16:32:07.511556    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.511556    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.511556    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.511556    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.511556    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.511937    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cmcmh","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b1cdcd-2edc-4615-bc60-36b8c54196f3","resourceVersion":"614","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a7aad6d8-1e92-43d4-8eba-edbaa82c04c3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a7aad6d8-1e92-43d4-8eba-edbaa82c04c3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6266 chars]
	I0428 16:32:07.714830    4440 request.go:629] Waited for 202.0885ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.714968    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:07.715029    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.715029    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.715029    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.719604    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:07.719604    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.719604    4440 round_trippers.go:580]     Audit-Id: 2002d89c-695b-4950-8f7b-4eae27b29891
	I0428 16:32:07.719604    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.719707    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.719707    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.719707    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.719707    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.720144    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:07.720922    4440 pod_ready.go:92] pod "kube-proxy-cmcmh" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:07.721025    4440 pod_ready.go:81] duration metric: took 396.5002ms for pod "kube-proxy-cmcmh" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.721098    4440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:07.920182    4440 request.go:629] Waited for 198.998ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:07.920182    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-285400
	I0428 16:32:07.920182    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:07.920182    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:07.920182    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:07.925409    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:07.925984    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:07.925984    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:07 GMT
	I0428 16:32:07.925984    4440 round_trippers.go:580]     Audit-Id: dee8cd3a-cf0a-410b-b299-fa2aaaba1036
	I0428 16:32:07.925984    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:07.925984    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:07.925984    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:07.925984    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:07.926240    4440 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-285400","namespace":"kube-system","uid":"9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532","resourceVersion":"631","creationTimestamp":"2024-04-28T23:29:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.mirror":"7c3ab530a70c8e5869577096cc8a6009","kubernetes.io/config.seen":"2024-04-28T23:29:04.446839393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0428 16:32:08.110758    4440 request.go:629] Waited for 183.7985ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:08.111159    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes/functional-285400
	I0428 16:32:08.111159    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:08.111159    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:08.111159    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:08.115771    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:08.116143    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:08.116143    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:08.116143    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:08.116143    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:08.116143    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:08 GMT
	I0428 16:32:08.116143    4440 round_trippers.go:580]     Audit-Id: a9f00a16-d6ae-4b14-bf6d-de27b50f78b0
	I0428 16:32:08.116143    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:08.116446    4440 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:09Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0428 16:32:08.116925    4440 pod_ready.go:92] pod "kube-scheduler-functional-285400" in "kube-system" namespace has status "Ready":"True"
	I0428 16:32:08.116992    4440 pod_ready.go:81] duration metric: took 395.8937ms for pod "kube-scheduler-functional-285400" in "kube-system" namespace to be "Ready" ...
	I0428 16:32:08.116992    4440 pod_ready.go:38] duration metric: took 1.0659543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 16:32:08.117057    4440 api_server.go:52] waiting for apiserver process to appear ...
	I0428 16:32:08.129718    4440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 16:32:08.160999    4440 command_runner.go:130] > 5745
	I0428 16:32:08.161090    4440 api_server.go:72] duration metric: took 1.4854652s to wait for apiserver process to appear ...
	I0428 16:32:08.161090    4440 api_server.go:88] waiting for apiserver healthz status ...
	I0428 16:32:08.161090    4440 api_server.go:253] Checking apiserver healthz at https://172.27.228.231:8441/healthz ...
	I0428 16:32:08.168387    4440 api_server.go:279] https://172.27.228.231:8441/healthz returned 200:
	ok
	I0428 16:32:08.168928    4440 round_trippers.go:463] GET https://172.27.228.231:8441/version
	I0428 16:32:08.168928    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:08.169013    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:08.169013    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:08.170925    4440 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 16:32:08.170925    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:08.171013    4440 round_trippers.go:580]     Content-Length: 263
	I0428 16:32:08.171013    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:08 GMT
	I0428 16:32:08.171013    4440 round_trippers.go:580]     Audit-Id: 620935a3-5d81-49c3-8a14-4f807481ad12
	I0428 16:32:08.171098    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:08.171098    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:08.171098    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:08.171167    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:08.171206    4440 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 16:32:08.171363    4440 api_server.go:141] control plane version: v1.30.0
	I0428 16:32:08.171401    4440 api_server.go:131] duration metric: took 10.3113ms to wait for apiserver health ...
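
The apiserver health check above is two requests: GET /healthz, which returns HTTP 200 with the body "ok" when the apiserver is serving, and GET /version, whose JSON is parsed to report the control-plane version. The same pair of probes through client-go's discovery client (a sketch, not minikube's api_server.go code):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /healthz: a healthy apiserver answers 200 with the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
        fmt.Printf("healthz: %s (err=%v)\n", body, err)

        // GET /version: parsed into a version.Info (gitVersion was v1.30.0 here).
        if info, err := cs.Discovery().ServerVersion(); err == nil {
            fmt.Println("control plane version:", info.GitVersion)
        }
    }
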
	I0428 16:32:08.171477    4440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 16:32:08.315523    4440 request.go:629] Waited for 143.9149ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:32:08.315833    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:32:08.315934    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:08.315934    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:08.315934    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:08.321531    4440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 16:32:08.321637    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:08.321637    4440 round_trippers.go:580]     Audit-Id: 72b4f862-ff8f-463d-aa57-cdb4f0a058ff
	I0428 16:32:08.321637    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:08.321637    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:08.321637    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:08.321637    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:08.321637    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:08 GMT
	I0428 16:32:08.322620    4440 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"615","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50865 chars]
	I0428 16:32:08.325414    4440 system_pods.go:59] 7 kube-system pods found
	I0428 16:32:08.325414    4440 system_pods.go:61] "coredns-7db6d8ff4d-w4tmj" [6373f137-e7ed-49b8-91bb-fb26c74db65e] Running
	I0428 16:32:08.325414    4440 system_pods.go:61] "etcd-functional-285400" [c13e206d-870c-452b-9505-0ea9d8fda928] Running
	I0428 16:32:08.325572    4440 system_pods.go:61] "kube-apiserver-functional-285400" [324ce332-5282-441c-8e35-d056cd19c5d5] Running
	I0428 16:32:08.325572    4440 system_pods.go:61] "kube-controller-manager-functional-285400" [ba66ef53-0af6-4e90-8992-67b81a6352f3] Running
	I0428 16:32:08.325572    4440 system_pods.go:61] "kube-proxy-cmcmh" [d6b1cdcd-2edc-4615-bc60-36b8c54196f3] Running
	I0428 16:32:08.325572    4440 system_pods.go:61] "kube-scheduler-functional-285400" [9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532] Running
	I0428 16:32:08.325572    4440 system_pods.go:61] "storage-provisioner" [eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8] Running
	I0428 16:32:08.325639    4440 system_pods.go:74] duration metric: took 154.0352ms to wait for pod list to return data ...
	I0428 16:32:08.325639    4440 default_sa.go:34] waiting for default service account to be created ...
	I0428 16:32:08.519675    4440 request.go:629] Waited for 193.7249ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/namespaces/default/serviceaccounts
	I0428 16:32:08.519675    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/default/serviceaccounts
	I0428 16:32:08.519675    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:08.519866    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:08.519866    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:08.524272    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:08.524591    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:08.524591    4440 round_trippers.go:580]     Audit-Id: 149d87cc-14db-4e7b-be29-3cd3b299d791
	I0428 16:32:08.524591    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:08.524758    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:08.524758    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:08.524758    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:08.524758    4440 round_trippers.go:580]     Content-Length: 261
	I0428 16:32:08.524758    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:08 GMT
	I0428 16:32:08.524758    4440 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"94f5236f-1ac1-4f8e-9372-469602bc9160","resourceVersion":"334","creationTimestamp":"2024-04-28T23:29:26Z"}}]}
	I0428 16:32:08.525422    4440 default_sa.go:45] found service account: "default"
	I0428 16:32:08.525527    4440 default_sa.go:55] duration metric: took 199.8884ms for default service account to be created ...
	I0428 16:32:08.525527    4440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 16:32:08.709253    4440 request.go:629] Waited for 183.3765ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:32:08.709379    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods
	I0428 16:32:08.709379    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:08.709379    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:08.709379    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:08.715746    4440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 16:32:08.715746    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:08.715746    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:08.715867    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:08.715867    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:08 GMT
	I0428 16:32:08.715867    4440 round_trippers.go:580]     Audit-Id: e88fbcf1-46a7-4fcc-8130-84efbf3f4738
	I0428 16:32:08.715867    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:08.715867    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:08.717215    4440 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w4tmj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"6373f137-e7ed-49b8-91bb-fb26c74db65e","resourceVersion":"615","creationTimestamp":"2024-04-28T23:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"9bf6718e-ba1f-4ba5-91ec-ec220bac317e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-28T23:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bf6718e-ba1f-4ba5-91ec-ec220bac317e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50865 chars]
	I0428 16:32:08.718907    4440 system_pods.go:86] 7 kube-system pods found
	I0428 16:32:08.718907    4440 system_pods.go:89] "coredns-7db6d8ff4d-w4tmj" [6373f137-e7ed-49b8-91bb-fb26c74db65e] Running
	I0428 16:32:08.718907    4440 system_pods.go:89] "etcd-functional-285400" [c13e206d-870c-452b-9505-0ea9d8fda928] Running
	I0428 16:32:08.718907    4440 system_pods.go:89] "kube-apiserver-functional-285400" [324ce332-5282-441c-8e35-d056cd19c5d5] Running
	I0428 16:32:08.718907    4440 system_pods.go:89] "kube-controller-manager-functional-285400" [ba66ef53-0af6-4e90-8992-67b81a6352f3] Running
	I0428 16:32:08.718907    4440 system_pods.go:89] "kube-proxy-cmcmh" [d6b1cdcd-2edc-4615-bc60-36b8c54196f3] Running
	I0428 16:32:08.718907    4440 system_pods.go:89] "kube-scheduler-functional-285400" [9864aa7e-5dc8-4cbc-be11-dbd4ed9b1532] Running
	I0428 16:32:08.718907    4440 system_pods.go:89] "storage-provisioner" [eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8] Running
	I0428 16:32:08.718907    4440 system_pods.go:126] duration metric: took 193.3796ms to wait for k8s-apps to be running ...
	I0428 16:32:08.719941    4440 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 16:32:08.734456    4440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 16:32:08.762917    4440 system_svc.go:56] duration metric: took 42.9754ms WaitForService to wait for kubelet
	I0428 16:32:08.762917    4440 kubeadm.go:576] duration metric: took 2.087382s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 16:32:08.762917    4440 node_conditions.go:102] verifying NodePressure condition ...
	I0428 16:32:08.838151    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:32:08.838151    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:08.838475    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:32:08.838475    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:08.842407    4440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 16:32:08.838475    4440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:32:08.842557    4440 kapi.go:59] client config for functional-285400: &rest.Config{Host:"https://172.27.228.231:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-285400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-285400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 16:32:08.845004    4440 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 16:32:08.845004    4440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 16:32:08.845004    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:32:08.845684    4440 addons.go:234] Setting addon default-storageclass=true in "functional-285400"
	W0428 16:32:08.845684    4440 addons.go:243] addon default-storageclass should already be in state true
	I0428 16:32:08.845684    4440 host.go:66] Checking if "functional-285400" exists ...
	I0428 16:32:08.847140    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:32:08.913716    4440 request.go:629] Waited for 150.6468ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.228.231:8441/api/v1/nodes
	I0428 16:32:08.913775    4440 round_trippers.go:463] GET https://172.27.228.231:8441/api/v1/nodes
	I0428 16:32:08.913775    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:08.913775    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:08.913775    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:08.918386    4440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 16:32:08.918386    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:08.918386    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:08 GMT
	I0428 16:32:08.918386    4440 round_trippers.go:580]     Audit-Id: 5cf6b8a9-0551-48ac-b9be-dc5ea29585e8
	I0428 16:32:08.918386    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:08.918386    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:08.918386    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:08.918565    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:08.918851    4440 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"functional-285400","uid":"9b9e2189-de4c-4dec-8aba-b7ceaca00113","resourceVersion":"543","creationTimestamp":"2024-04-28T23:29:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-285400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"functional-285400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T16_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0428 16:32:08.919485    4440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 16:32:08.919485    4440 node_conditions.go:123] node cpu capacity is 2
	I0428 16:32:08.919581    4440 node_conditions.go:105] duration metric: took 156.5683ms to run NodePressure ...
	I0428 16:32:08.919581    4440 start.go:240] waiting for startup goroutines ...
	I0428 16:32:10.927246    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:32:10.927246    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:10.927246    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:32:10.964397    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:32:10.964516    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:10.964737    4440 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 16:32:10.964737    4440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 16:32:10.964798    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:32:13.084251    4440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:32:13.084462    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:13.084547    4440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:32:13.437412    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:32:13.437597    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:13.437773    4440 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:32:13.593214    4440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 16:32:14.453813    4440 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0428 16:32:14.454566    4440 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0428 16:32:14.454566    4440 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0428 16:32:14.454691    4440 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0428 16:32:14.454691    4440 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0428 16:32:14.454732    4440 command_runner.go:130] > pod/storage-provisioner configured
	I0428 16:32:15.608157    4440 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:32:15.608157    4440 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:32:15.608157    4440 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:32:15.742670    4440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 16:32:15.920449    4440 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0428 16:32:15.920832    4440 round_trippers.go:463] GET https://172.27.228.231:8441/apis/storage.k8s.io/v1/storageclasses
	I0428 16:32:15.920903    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:15.920903    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:15.920962    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:15.923509    4440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 16:32:15.923509    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:15.923509    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:15.923509    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:15.923509    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:15.923509    4440 round_trippers.go:580]     Content-Length: 1273
	I0428 16:32:15.923509    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:15 GMT
	I0428 16:32:15.923509    4440 round_trippers.go:580]     Audit-Id: 6a66e787-751c-41e0-824f-27ed614422b8
	I0428 16:32:15.923509    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:15.923509    4440 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"638"},"items":[{"metadata":{"name":"standard","uid":"0329c664-8a8e-4915-b5c1-309f87c9dde0","resourceVersion":"433","creationTimestamp":"2024-04-28T23:29:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-28T23:29:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0428 16:32:15.924557    4440 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0329c664-8a8e-4915-b5c1-309f87c9dde0","resourceVersion":"433","creationTimestamp":"2024-04-28T23:29:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-28T23:29:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0428 16:32:15.924557    4440 round_trippers.go:463] PUT https://172.27.228.231:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 16:32:15.924557    4440 round_trippers.go:469] Request Headers:
	I0428 16:32:15.924557    4440 round_trippers.go:473]     Accept: application/json, */*
	I0428 16:32:15.924557    4440 round_trippers.go:473]     Content-Type: application/json
	I0428 16:32:15.924557    4440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 16:32:15.927562    4440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 16:32:15.928569    4440 round_trippers.go:577] Response Headers:
	I0428 16:32:15.928569    4440 round_trippers.go:580]     Audit-Id: 630cf152-4c63-47dc-a9b0-d9be9cbd3b12
	I0428 16:32:15.928569    4440 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 16:32:15.928569    4440 round_trippers.go:580]     Content-Type: application/json
	I0428 16:32:15.928641    4440 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb86e9c-61b8-4d30-abb1-e75d5db2b7d8
	I0428 16:32:15.928641    4440 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cad0afb1-131b-48c7-97f8-835e10cdc532
	I0428 16:32:15.928641    4440 round_trippers.go:580]     Content-Length: 1220
	I0428 16:32:15.928680    4440 round_trippers.go:580]     Date: Sun, 28 Apr 2024 23:32:15 GMT
	I0428 16:32:15.928770    4440 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0329c664-8a8e-4915-b5c1-309f87c9dde0","resourceVersion":"433","creationTimestamp":"2024-04-28T23:29:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-28T23:29:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0428 16:32:15.934521    4440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 16:32:15.936974    4440 addons.go:505] duration metric: took 9.2615821s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 16:32:15.937092    4440 start.go:245] waiting for cluster config update ...
	I0428 16:32:15.937092    4440 start.go:254] writing updated cluster config ...
	I0428 16:32:15.949376    4440 ssh_runner.go:195] Run: rm -f paused
	I0428 16:32:16.099133    4440 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0428 16:32:16.102832    4440 out.go:177] * Done! kubectl is now configured to use "functional-285400" cluster and "default" namespace by default
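
A note on the "Waited for ... due to client-side throttling, not priority and fairness" entries above: those waits come from client-go's default client-side rate limiter (QPS 5, burst 10), which queues requests before they ever reach the apiserver's API Priority and Fairness machinery. A minimal sketch of where those limits live, assuming the standard k8s.io/client-go API (illustrative only, not minikube's actual code path):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the run above just wrote (~/.kube/config by default).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // With the defaults (QPS=5, Burst=10), bursts of GETs like the pod/node
        // polls above queue client-side, producing the ~150-200ms request.go waits.
        // Raising the limits removes those waits for chatty controllers and tests.
        cfg.QPS = 50
        cfg.Burst = 100

        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }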
	
	
	==> Docker <==
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898499590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898863066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942176864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942483443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942588436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.943029007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024012602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024070699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024082098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024310184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:31:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518/resolv.conf as [nameserver 172.27.224.1]"
	Apr 28 23:31:53 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:31:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712/resolv.conf as [nameserver 172.27.224.1]"
	Apr 28 23:31:53 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:31:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247/resolv.conf as [nameserver 172.27.224.1]"
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.421904057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424619589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424902972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.425330145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.527027338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535874777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535912375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.536023268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768505131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768864908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768896706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.769020698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bb72a14bc213       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   c5b56189153d3       coredns-7db6d8ff4d-w4tmj
	68a91ddb28289       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   64884080de2ca       storage-provisioner
	adf0e04b0c300       a0bf559e280cf       2 minutes ago       Running             kube-proxy                2                   d79f63518700b       kube-proxy-cmcmh
	ad09f3881d270       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   7fae71c72bf2b       etcd-functional-285400
	3d99d3b4a2452       259c8277fcbbc       2 minutes ago       Running             kube-scheduler            2                   ad8616b3f34ca       kube-scheduler-functional-285400
	a0de6e012e89d       c7aad43836fa5       2 minutes ago       Running             kube-controller-manager   2                   ad1c9573c2179       kube-controller-manager-functional-285400
	e12ec17c7cd66       c42f13656d0b2       2 minutes ago       Running             kube-apiserver            2                   6bbce171d9313       kube-apiserver-functional-285400
	4ed6581dd266f       c42f13656d0b2       2 minutes ago       Created             kube-apiserver            1                   b37acf5d47076       kube-apiserver-functional-285400
	d944ce960b21e       c7aad43836fa5       2 minutes ago       Created             kube-controller-manager   1                   9d061e1398da2       kube-controller-manager-functional-285400
	0a13487c372a6       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   d57ac6a873278       etcd-functional-285400
	433fcffb54c95       a0bf559e280cf       2 minutes ago       Created             kube-proxy                1                   cd5d493f46dd8       kube-proxy-cmcmh
	9d14cad0dcbb4       259c8277fcbbc       2 minutes ago       Exited              kube-scheduler            1                   d09c631e65fbb       kube-scheduler-functional-285400
	8f29a8fbd5b24       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   36a11974a0fdc       storage-provisioner
	cbf5b97235b0d       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   3291d76a665ca       coredns-7db6d8ff4d-w4tmj
	
	
	==> coredns [2bb72a14bc21] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55854 - 4707 "HINFO IN 1038415499612833656.2764333435354966399. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044633891s
	
	
	==> coredns [cbf5b97235b0] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1671368767]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Apr-2024 23:29:29.290) (total time: 30000ms):
	Trace[1671368767]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:29:59.290)
	Trace[1671368767]: [30.000817822s] [30.000817822s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[75866407]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Apr-2024 23:29:29.290) (total time: 30000ms):
	Trace[75866407]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:29:59.291)
	Trace[75866407]: [30.000760057s] [30.000760057s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[413527412]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Apr-2024 23:29:29.291) (total time: 30001ms):
	Trace[413527412]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:29:59.292)
	Trace[413527412]: [30.001429523s] [30.001429523s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38069 - 18841 "HINFO IN 6645359543455800314.2736690233227377849. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040538881s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
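
The "[ERROR] plugin/kubernetes ... dial tcp 10.96.0.1:443: i/o timeout" entries above show the old coredns instance unable to reach the kubernetes service VIP while the control plane restarted; each reflector List blocks for the full 30s before erroring. A quick reachability probe in the same spirit, assuming in-cluster credentials and k8s.io/client-go (a hedged sketch, not the coredns plugin's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Inside a pod this resolves to the service VIP (https://10.96.0.1:443 here).
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cfg.Timeout = 5 * time.Second // fail fast instead of the 30s reflector timeout

        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same call shape as the reflector's failing List above.
        svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{Limit: 500})
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        fmt.Println("services listed:", len(svcs.Items))
    }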
	
	
	==> describe nodes <==
	Name:               functional-285400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-285400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=functional-285400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T16_29_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:29:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-285400
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 28 Apr 2024 23:33:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 28 Apr 2024 23:33:53 +0000   Sun, 28 Apr 2024 23:29:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 28 Apr 2024 23:33:53 +0000   Sun, 28 Apr 2024 23:29:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 28 Apr 2024 23:33:53 +0000   Sun, 28 Apr 2024 23:29:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 28 Apr 2024 23:33:53 +0000   Sun, 28 Apr 2024 23:29:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.228.231
	  Hostname:    functional-285400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 f10841805f0c4e3399b22e8e7bd50466
	  System UUID:                25de6e6d-2255-e54d-86e0-20531d6f5992
	  Boot ID:                    4c4d878a-860a-408c-b9fb-a49c47ea9aff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-w4tmj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 etcd-functional-285400                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-functional-285400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-functional-285400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-cmcmh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-functional-285400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 2m2s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node functional-285400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node functional-285400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node functional-285400 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s                  kubelet          Node functional-285400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s                  kubelet          Node functional-285400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s                  kubelet          Node functional-285400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m38s                  kubelet          Node functional-285400 status is now: NodeReady
	  Normal  RegisteredNode           4m30s                  node-controller  Node functional-285400 event: Registered Node functional-285400 in Controller
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node functional-285400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node functional-285400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node functional-285400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                   node-controller  Node functional-285400 event: Registered Node functional-285400 in Controller
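
The percentage cells in the resource tables above come from kubectl output relayed through a Printf-style logger; in raw minikube logs they frequently surface as "100m (5%!)(MISSING)" because the literal '%' in the relayed line is parsed as a format verb with no operand, which Go renders as %!<verb>(MISSING). A two-line repro (a sketch of the failure mode, not minikube's actual logging code):

    package main

    import "fmt"

    func main() {
        line := "cpu 100m (5%)"
        fmt.Printf(line)         // ')' is read as a verb with no operand: cpu 100m (5%!)(MISSING)
        fmt.Println()
        fmt.Printf("%s\n", line) // passing the line as an operand is safe: cpu 100m (5%)
    }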
	
	
	==> dmesg <==
	[  +5.334044] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.694611] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[Apr28 23:29] systemd-fstab-generator[1732]: Ignoring "noauto" option for root device
	[  +0.117984] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.052783] systemd-fstab-generator[2131]: Ignoring "noauto" option for root device
	[  +0.148408] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.453966] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.220302] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.841050] kauditd_printk_skb: 88 callbacks suppressed
	[Apr28 23:30] kauditd_printk_skb: 10 callbacks suppressed
	[Apr28 23:31] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.670661] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +0.288124] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	
	
	==> etcd [0a13487c372a] <==
	{"level":"warn","ts":"2024-04-28T23:31:42.889606Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-28T23:31:42.889787Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.27.228.231:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.27.228.231:2380","--initial-cluster=functional-285400=https://172.27.228.231:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.27.228.231:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.27.228.231:2380","--name=functional-285400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=1000
0","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-04-28T23:31:42.890012Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-04-28T23:31:42.890046Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-28T23:31:42.890057Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.27.228.231:2380"]}
	{"level":"info","ts":"2024-04-28T23:31:42.890088Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-28T23:31:42.892226Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.27.228.231:2379"]}
	{"level":"info","ts":"2024-04-28T23:31:42.892373Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-285400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.27.228.231:2380"],"listen-peer-urls":["https://172.27.228.231:2380"],"advertise-client-urls":["https://172.27.228.231:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.228.231:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initi
al-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-04-28T23:31:42.915046Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.34868ms"}
	
	
	==> etcd [ad09f3881d27] <==
	{"level":"info","ts":"2024-04-28T23:31:47.885849Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-28T23:31:47.886286Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-28T23:31:47.890906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 switched to configuration voters=(12820282834597182294)"}
	{"level":"info","ts":"2024-04-28T23:31:47.891457Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c4c3efff2762349","local-member-id":"b1eacb503433fb56","added-peer-id":"b1eacb503433fb56","added-peer-peer-urls":["https://172.27.228.231:2380"]}
	{"level":"info","ts":"2024-04-28T23:31:47.898261Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9c4c3efff2762349","local-member-id":"b1eacb503433fb56","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-28T23:31:47.898627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-28T23:31:47.903487Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-28T23:31:47.903779Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.228.231:2380"}
	{"level":"info","ts":"2024-04-28T23:31:47.907146Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.228.231:2380"}
	{"level":"info","ts":"2024-04-28T23:31:47.909525Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b1eacb503433fb56","initial-advertise-peer-urls":["https://172.27.228.231:2380"],"listen-peer-urls":["https://172.27.228.231:2380"],"advertise-client-urls":["https://172.27.228.231:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.228.231:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-28T23:31:47.912286Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-28T23:31:49.025068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-28T23:31:49.02539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-28T23:31:49.025676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 received MsgPreVoteResp from b1eacb503433fb56 at term 2"}
	{"level":"info","ts":"2024-04-28T23:31:49.025838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 became candidate at term 3"}
	{"level":"info","ts":"2024-04-28T23:31:49.026161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 received MsgVoteResp from b1eacb503433fb56 at term 3"}
	{"level":"info","ts":"2024-04-28T23:31:49.026209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b1eacb503433fb56 became leader at term 3"}
	{"level":"info","ts":"2024-04-28T23:31:49.026223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b1eacb503433fb56 elected leader b1eacb503433fb56 at term 3"}
	{"level":"info","ts":"2024-04-28T23:31:49.03832Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b1eacb503433fb56","local-member-attributes":"{Name:functional-285400 ClientURLs:[https://172.27.228.231:2379]}","request-path":"/0/members/b1eacb503433fb56/attributes","cluster-id":"9c4c3efff2762349","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-28T23:31:49.038782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-28T23:31:49.038806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-28T23:31:49.038818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-28T23:31:49.042961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-28T23:31:49.039084Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-28T23:31:49.047839Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.228.231:2379"}
	
	
	==> kernel <==
	 23:33:56 up 6 min,  0 users,  load average: 0.49, 0.54, 0.27
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ed6581dd266] <==
	
	
	==> kube-apiserver [e12ec17c7cd6] <==
	I0428 23:31:50.747593       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0428 23:31:50.748280       1 aggregator.go:165] initial CRD sync complete...
	I0428 23:31:50.748486       1 autoregister_controller.go:141] Starting autoregister controller
	I0428 23:31:50.748684       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0428 23:31:50.805960       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0428 23:31:50.806444       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0428 23:31:50.806534       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0428 23:31:50.826991       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0428 23:31:50.829462       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0428 23:31:50.829658       1 policy_source.go:224] refreshing policies
	I0428 23:31:50.842354       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0428 23:31:50.842661       1 shared_informer.go:320] Caches are synced for configmaps
	I0428 23:31:50.847246       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0428 23:31:50.849762       1 cache.go:39] Caches are synced for autoregister controller
	I0428 23:31:50.854905       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0428 23:31:50.884205       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0428 23:31:51.617725       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0428 23:31:52.133391       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.228.231]
	I0428 23:31:52.136054       1 controller.go:615] quota admission added evaluator for: endpoints
	I0428 23:31:52.149337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0428 23:31:52.794164       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0428 23:31:52.825754       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0428 23:31:52.898932       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0428 23:31:52.995551       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0428 23:31:53.009408       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [a0de6e012e89] <==
	I0428 23:32:03.902575       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0428 23:32:03.916033       1 shared_informer.go:320] Caches are synced for node
	I0428 23:32:03.916255       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0428 23:32:03.916402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0428 23:32:03.916414       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0428 23:32:03.916421       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0428 23:32:03.920850       1 shared_informer.go:320] Caches are synced for namespace
	I0428 23:32:03.933620       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0428 23:32:03.935965       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0428 23:32:03.940741       1 shared_informer.go:320] Caches are synced for GC
	I0428 23:32:03.970620       1 shared_informer.go:320] Caches are synced for service account
	I0428 23:32:03.972071       1 shared_informer.go:320] Caches are synced for deployment
	I0428 23:32:03.988230       1 shared_informer.go:320] Caches are synced for disruption
	I0428 23:32:04.014200       1 shared_informer.go:320] Caches are synced for resource quota
	I0428 23:32:04.049692       1 shared_informer.go:320] Caches are synced for ephemeral
	I0428 23:32:04.053382       1 shared_informer.go:320] Caches are synced for PV protection
	I0428 23:32:04.056182       1 shared_informer.go:320] Caches are synced for PVC protection
	I0428 23:32:04.085355       1 shared_informer.go:320] Caches are synced for expand
	I0428 23:32:04.099937       1 shared_informer.go:320] Caches are synced for persistent volume
	I0428 23:32:04.103086       1 shared_informer.go:320] Caches are synced for stateful set
	I0428 23:32:04.108879       1 shared_informer.go:320] Caches are synced for resource quota
	I0428 23:32:04.152853       1 shared_informer.go:320] Caches are synced for attach detach
	I0428 23:32:04.568689       1 shared_informer.go:320] Caches are synced for garbage collector
	I0428 23:32:04.568790       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0428 23:32:04.575939       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [d944ce960b21] <==
	
	
	==> kube-proxy [433fcffb54c9] <==
	
	
	==> kube-proxy [adf0e04b0c30] <==
	I0428 23:31:53.693923       1 server_linux.go:69] "Using iptables proxy"
	I0428 23:31:53.706645       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.228.231"]
	I0428 23:31:53.814514       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0428 23:31:53.814562       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0428 23:31:53.814579       1 server_linux.go:165] "Using iptables Proxier"
	I0428 23:31:53.820502       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0428 23:31:53.821068       1 server.go:872] "Version info" version="v1.30.0"
	I0428 23:31:53.821354       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0428 23:31:53.825206       1 config.go:192] "Starting service config controller"
	I0428 23:31:53.825287       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0428 23:31:53.825363       1 config.go:101] "Starting endpoint slice config controller"
	I0428 23:31:53.825374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0428 23:31:53.825941       1 config.go:319] "Starting node config controller"
	I0428 23:31:53.825949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0428 23:31:53.926333       1 shared_informer.go:320] Caches are synced for node config
	I0428 23:31:53.926775       1 shared_informer.go:320] Caches are synced for service config
	I0428 23:31:53.926796       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3d99d3b4a245] <==
	W0428 23:31:50.764310       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0428 23:31:50.765304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0428 23:31:50.790741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0428 23:31:50.790888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0428 23:31:50.791045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0428 23:31:50.791174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0428 23:31:50.791413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0428 23:31:50.791501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0428 23:31:50.791645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0428 23:31:50.791727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0428 23:31:50.791869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0428 23:31:50.791947       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0428 23:31:50.792974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0428 23:31:50.793134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0428 23:31:50.793546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0428 23:31:50.793853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0428 23:31:50.794070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0428 23:31:50.794226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0428 23:31:50.795202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0428 23:31:50.796465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0428 23:31:50.795906       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0428 23:31:50.797515       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0428 23:31:50.797925       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0428 23:31:50.798034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0428 23:31:51.646749       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9d14cad0dcbb] <==
	I0428 23:31:43.444753       1 serving.go:380] Generated self-signed cert in-memory
	W0428 23:31:43.942227       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://172.27.228.231:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 172.27.228.231:8441: connect: connection refused
	W0428 23:31:43.942326       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0428 23:31:43.942337       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0428 23:31:43.946458       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0428 23:31:43.946569       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0428 23:31:43.948901       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0428 23:31:43.948976       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0428 23:31:43.949802       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0428 23:31:43.948995       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0428 23:31:43.950086       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0428 23:31:43.950436       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.809233    5373 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.809385    5373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6373f137-e7ed-49b8-91bb-fb26c74db65e-config-volume podName:6373f137-e7ed-49b8-91bb-fb26c74db65e nodeName:}" failed. No retries permitted until 2024-04-28 23:31:52.309361351 +0000 UTC m=+6.717926933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6373f137-e7ed-49b8-91bb-fb26c74db65e-config-volume") pod "coredns-7db6d8ff4d-w4tmj" (UID: "6373f137-e7ed-49b8-91bb-fb26c74db65e") : failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.809590    5373 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.809634    5373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d6b1cdcd-2edc-4615-bc60-36b8c54196f3-kube-proxy podName:d6b1cdcd-2edc-4615-bc60-36b8c54196f3 nodeName:}" failed. No retries permitted until 2024-04-28 23:31:52.309624732 +0000 UTC m=+6.718190314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d6b1cdcd-2edc-4615-bc60-36b8c54196f3-kube-proxy") pod "kube-proxy-cmcmh" (UID: "d6b1cdcd-2edc-4615-bc60-36b8c54196f3") : failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.931924    5373 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.932136    5373 projected.go:200] Error preparing data for projected volume kube-api-access-6ngwg for pod kube-system/kube-proxy-cmcmh: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.932276    5373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6b1cdcd-2edc-4615-bc60-36b8c54196f3-kube-api-access-6ngwg podName:d6b1cdcd-2edc-4615-bc60-36b8c54196f3 nodeName:}" failed. No retries permitted until 2024-04-28 23:31:52.43225532 +0000 UTC m=+6.840820902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6ngwg" (UniqueName: "kubernetes.io/projected/d6b1cdcd-2edc-4615-bc60-36b8c54196f3-kube-api-access-6ngwg") pod "kube-proxy-cmcmh" (UID: "d6b1cdcd-2edc-4615-bc60-36b8c54196f3") : failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.932622    5373 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.932735    5373 projected.go:200] Error preparing data for projected volume kube-api-access-cxjqz for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.932893    5373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8-kube-api-access-cxjqz podName:eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8 nodeName:}" failed. No retries permitted until 2024-04-28 23:31:52.432879575 +0000 UTC m=+6.841445157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cxjqz" (UniqueName: "kubernetes.io/projected/eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8-kube-api-access-cxjqz") pod "storage-provisioner" (UID: "eafdcc52-b6b9-490f-b7aa-1999dbc4a6a8") : failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.932700    5373 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.933151    5373 projected.go:200] Error preparing data for projected volume kube-api-access-68kgt for pod kube-system/coredns-7db6d8ff4d-w4tmj: failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:51 functional-285400 kubelet[5373]: E0428 23:31:51.933294    5373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6373f137-e7ed-49b8-91bb-fb26c74db65e-kube-api-access-68kgt podName:6373f137-e7ed-49b8-91bb-fb26c74db65e nodeName:}" failed. No retries permitted until 2024-04-28 23:31:52.433281846 +0000 UTC m=+6.841847528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-68kgt" (UniqueName: "kubernetes.io/projected/6373f137-e7ed-49b8-91bb-fb26c74db65e-kube-api-access-68kgt") pod "coredns-7db6d8ff4d-w4tmj" (UID: "6373f137-e7ed-49b8-91bb-fb26c74db65e") : failed to sync configmap cache: timed out waiting for the condition
	Apr 28 23:31:55 functional-285400 kubelet[5373]: I0428 23:31:55.989350    5373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 28 23:31:57 functional-285400 kubelet[5373]: I0428 23:31:57.078165    5373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 28 23:32:45 functional-285400 kubelet[5373]: E0428 23:32:45.878421    5373 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 28 23:32:45 functional-285400 kubelet[5373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 28 23:32:45 functional-285400 kubelet[5373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 28 23:32:45 functional-285400 kubelet[5373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 28 23:32:45 functional-285400 kubelet[5373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 28 23:33:45 functional-285400 kubelet[5373]: E0428 23:33:45.875272    5373 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 28 23:33:45 functional-285400 kubelet[5373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 28 23:33:45 functional-285400 kubelet[5373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 28 23:33:45 functional-285400 kubelet[5373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 28 23:33:45 functional-285400 kubelet[5373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [68a91ddb2828] <==
	I0428 23:31:53.702764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0428 23:31:53.743980       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0428 23:31:53.744029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0428 23:32:11.160039       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0428 23:32:11.160592       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-285400_c05be318-df80-4fbc-92e3-d7dee46ebd11!
	I0428 23:32:11.160347       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f5a281e-df16-4ad6-a2bd-41bfea9b9ea5", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-285400_c05be318-df80-4fbc-92e3-d7dee46ebd11 became leader
	I0428 23:32:11.261644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-285400_c05be318-df80-4fbc-92e3-d7dee46ebd11!
	
	
	==> storage-provisioner [8f29a8fbd5b2] <==
	I0428 23:29:36.009398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0428 23:29:36.023746       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0428 23:29:36.024111       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0428 23:29:36.045467       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0428 23:29:36.046516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-285400_b3fcc891-ec2e-42de-b2f5-dd6abf327f56!
	I0428 23:29:36.045795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f5a281e-df16-4ad6-a2bd-41bfea9b9ea5", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-285400_b3fcc891-ec2e-42de-b2f5-dd6abf327f56 became leader
	I0428 23:29:36.147291       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-285400_b3fcc891-ec2e-42de-b2f5-dd6abf327f56!
	

-- /stdout --
** stderr ** 
	W0428 16:33:48.567356   12312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
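The repeated "Unable to resolve the current Docker CLI context \"default\"" warning in the stderr capture above is host-side noise from minikube: the Docker CLI config on the Jenkins agent selects a context whose metadata directory no longer exists under .docker\contexts\meta. A minimal PowerShell cleanup sketch, assuming the Docker CLI is on PATH (the paths come from this report; whether re-selecting the context clears the warning can vary by CLI version, so treat this as a suggestion rather than the harness's own remediation):

	# List the contexts the CLI knows about; the built-in "default" needs no on-disk metadata.
	docker context ls
	# Re-select the built-in default so config.json stops pointing at the missing
	# ...\.docker\contexts\meta\<hash>\meta.json directory.
	docker context use default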
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: (11.4199399s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-285400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (32.67s)
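The kubelet section of the log dump above also shows recurring "Could not set up iptables canary" errors: the Buildroot guest kernel exposes no ip6tables nat table, so kubelet's periodic IPv6 canary-chain creation fails. This is typically benign log noise rather than the cause of this test's failure. A diagnostic sketch, assuming the functional-285400 VM is still running (these commands are illustrative and not part of the harness):

	# Check whether the guest kernel has the ip6tables nat module loaded.
	out/minikube-windows-amd64.exe ssh -p functional-285400 -- "lsmod | grep ip6table_nat"
	# Loading the module (if it ships in /lib/modules) would silence the canary error.
	out/minikube-windows-amd64.exe ssh -p functional-285400 -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"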

x
+
TestFunctional/serial/ExtraConfig (277.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-285400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0428 16:35:36.417326    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-285400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m25.1175789s)

-- stdout --
	* [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-285400" primary control-plane node in "functional-285400" cluster
	* Updating the running hyperv "functional-285400" VM ...
	
	

-- /stdout --
** stderr ** 
	W0428 16:34:09.668586    5336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 28 23:28:08 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.458031073Z" level=info msg="Starting up"
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.459132842Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.460004117Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.500839567Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526294849Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526404946Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526466044Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526481344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526545242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526674239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526852634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527002029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527060828Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527176124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527267922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527554014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.534676013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.534792010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535161999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535266996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535432692Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535495990Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535511190Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562162539Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562292435Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562319034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562337134Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562354533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562556428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563132211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563340805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563443403Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563467302Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563484301Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563501301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563516800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563533200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563560899Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563676996Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563821392Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563843891Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563869391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563885990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563903890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564003687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564039386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564070885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564122983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564137283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564150983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564177082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564191981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564206881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564220081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564238980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564262979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564277079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564291079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564347177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564386676Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564401276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564412475Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564674868Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564695067Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565010258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565255551Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565408647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565538844Z" level=info msg="containerd successfully booted in 0.066369s"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.531334331Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.562701703Z" level=info msg="Loading containers: start."
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.834821291Z" level=info msg="Loading containers: done."
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.861786023Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.862018421Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.978533892Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.978695591Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:09 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.460834317Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.462423349Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:28:39 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464231485Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464287086Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464310087Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:28:40 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:28:40 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:28:40 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.547302815Z" level=info msg="Starting up"
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.548969049Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.553355337Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1040
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.586424804Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617787336Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617895638Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617948039Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617964140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618012441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618030041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618213945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618305847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618326847Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618337547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618363248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618517051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.621562612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.621828618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622133524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622226726Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622258026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622275327Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622292827Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622421630Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622472831Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622490631Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622505331Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622519532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622570833Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623109643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623296347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623417750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623440650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623465851Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623513652Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623546552Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623561353Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623575753Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623589153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623602153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623615954Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623636754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623670555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623786457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623809758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623822858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623859659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623874159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623886959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623900059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623917660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623929760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623941360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623955061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623971561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624098263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624200065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624224266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624352369Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624423470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624471871Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624489271Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624582273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624619874Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624633274Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625329088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625558393Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625897400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625964801Z" level=info msg="containerd successfully booted in 0.041442s"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.594527123Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.619764932Z" level=info msg="Loading containers: start."
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.794928563Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.874085758Z" level=info msg="Loading containers: done."
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.898483250Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.898544351Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.953742164Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:41 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.955514199Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.694528243Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.696757388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697046394Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697107895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697114195Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:28:50 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:28:51 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:28:51 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:28:51 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.790116226Z" level=info msg="Starting up"
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.791109646Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.792225068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1344
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.825171932Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853462002Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853595705Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853779609Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853806409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853838510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853862510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854053614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854152916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854174617Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854185817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854212017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854337420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857038074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857145077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857304280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857393782Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857423682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857442283Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857453083Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857739389Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857796290Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857815490Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857832190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857847291Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857899992Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858234699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858391302Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858411502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858425702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858445003Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858461503Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858475403Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858489304Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858522104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858552805Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858582406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858612006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858634407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858830211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858877111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858893812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858909412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858924712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858937713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858969413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859060615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859091916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859106816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859121016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859135417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859153317Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859178318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859193918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859207518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859270719Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859290720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859303420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859315720Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859393622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859417022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859428023Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859748329Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859907232Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859989034Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.860011234Z" level=info msg="containerd successfully booted in 0.036080s"
	Apr 28 23:28:53 functional-285400 dockerd[1338]: time="2024-04-28T23:28:53.261253278Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:55 functional-285400 dockerd[1338]: time="2024-04-28T23:28:55.954551264Z" level=info msg="Loading containers: start."
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.146101525Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.231907754Z" level=info msg="Loading containers: done."
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.256426148Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.256552251Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.302071268Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.302246672Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:56 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250283526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250430432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250528135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.252113295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334584914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334655716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334669617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334815522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.367942175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368010078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368026478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368111581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.404412954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.405670802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.405917811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.406508433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643175982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643396891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643565597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643948611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768173509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768340015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768361816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.769060742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899868788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899974992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899993793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901334044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901512951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901622555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901452248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.902130574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.735735186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.735972992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.736000893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.736804912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012009031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012102533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012121734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012333639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.221516592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.221985704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.222033705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.222175808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.989878612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990062316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990158018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990385723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021060231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021189529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021252427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021897214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.102986348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103111445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103127345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103233143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.635772700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636219292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636323190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636619984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.919962564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920236658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920349456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920534153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:39 functional-285400 dockerd[1338]: time="2024-04-28T23:29:39.294187745Z" level=info msg="ignoring event" container=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295781617Z" level=info msg="shim disconnected" id=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295917214Z" level=warning msg="cleaning up after shim disconnected" id=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295934314Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1338]: time="2024-04-28T23:29:39.481116050Z" level=info msg="ignoring event" container=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.485795968Z" level=info msg="shim disconnected" id=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.486559854Z" level=warning msg="cleaning up after shim disconnected" id=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.486618353Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.050160482Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.210355165Z" level=info msg="shim disconnected" id=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.211313483Z" level=info msg="ignoring event" container=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.211741190Z" level=warning msg="cleaning up after shim disconnected" id=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.211776991Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297470133Z" level=info msg="shim disconnected" id=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297556235Z" level=warning msg="cleaning up after shim disconnected" id=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297573535Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.297957642Z" level=info msg="ignoring event" container=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.329295406Z" level=info msg="ignoring event" container=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332690667Z" level=info msg="shim disconnected" id=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332789669Z" level=warning msg="cleaning up after shim disconnected" id=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332804069Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.375837044Z" level=info msg="ignoring event" container=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376572557Z" level=info msg="shim disconnected" id=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376680259Z" level=warning msg="cleaning up after shim disconnected" id=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376694659Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.405635180Z" level=info msg="ignoring event" container=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.407489013Z" level=info msg="shim disconnected" id=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.409482949Z" level=warning msg="cleaning up after shim disconnected" id=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.410230562Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.414879946Z" level=info msg="shim disconnected" id=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423219296Z" level=warning msg="cleaning up after shim disconnected" id=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423265697Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.430282023Z" level=info msg="ignoring event" container=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.432475463Z" level=info msg="ignoring event" container=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.432776868Z" level=info msg="ignoring event" container=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433309078Z" level=info msg="ignoring event" container=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433462081Z" level=info msg="ignoring event" container=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433585883Z" level=info msg="ignoring event" container=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.433872288Z" level=info msg="shim disconnected" id=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.434020091Z" level=warning msg="cleaning up after shim disconnected" id=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.434216994Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.438849678Z" level=info msg="shim disconnected" id=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.441729729Z" level=info msg="ignoring event" container=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441899732Z" level=warning msg="cleaning up after shim disconnected" id=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.442304640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423077594Z" level=info msg="shim disconnected" id=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.452785628Z" level=warning msg="cleaning up after shim disconnected" id=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.452931531Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441442824Z" level=info msg="shim disconnected" id=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.459901156Z" level=warning msg="cleaning up after shim disconnected" id=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.459958557Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423209896Z" level=info msg="shim disconnected" id=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.460989076Z" level=warning msg="cleaning up after shim disconnected" id=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.461061877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441488125Z" level=info msg="shim disconnected" id=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.471877672Z" level=warning msg="cleaning up after shim disconnected" id=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.471899572Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1338]: time="2024-04-28T23:31:31.229543896Z" level=info msg="ignoring event" container=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.230517013Z" level=info msg="shim disconnected" id=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.230888320Z" level=warning msg="cleaning up after shim disconnected" id=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.231009422Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.148962964Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189588096Z" level=info msg="shim disconnected" id=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189747061Z" level=warning msg="cleaning up after shim disconnected" id=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189763458Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.190664659Z" level=info msg="ignoring event" container=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.261851744Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262765043Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262851424Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262896614Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:31:37 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:31:37 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:31:37 functional-285400 systemd[1]: docker.service: Consumed 6.131s CPU time.
	Apr 28 23:31:37 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.352537531Z" level=info msg="Starting up"
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.353712889Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.357319447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4247
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.401843683Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430334119Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430470591Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430538977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430556673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430586367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430599864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430810221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430966789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430990084Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431001881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431029576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431299420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.434861687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435029652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435251907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435493957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435639227Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435795295Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435927967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436442961Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436693910Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436762096Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436789190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436805787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436862575Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437674608Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437887164Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437996642Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438065527Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438082824Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438139412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438180204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438201499Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438217796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438231993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438245790Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438258588Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438285582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438302879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438448948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438553927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438576522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438592419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438611115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439014032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439231088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439281377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439302773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439322169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439350263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439438745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439572017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439596512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439611009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439751380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439853859Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439874255Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439888252Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439970035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440019425Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440035322Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440468833Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440766771Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440875149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440943935Z" level=info msg="containerd successfully booted in 0.040342s"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.401217622Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.427439091Z" level=info msg="Loading containers: start."
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.712437419Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.790582628Z" level=info msg="Loading containers: done."
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.823815253Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.823979921Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.871316341Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.872491515Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:31:38 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.543496994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.544361160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.544512137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.547577062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572864745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572969229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572998724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.573088910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728876980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728953068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728966866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.729073250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831430795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831853130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831959713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.832160482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.856317241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.856679185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.857317886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.859001825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259745248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259845434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259864431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259971516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534393076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534497761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534513358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.547783046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595301000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595477974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595498771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595655249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708389805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708605074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708626271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.709006117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.806658846Z" level=info msg="shim disconnected" id=433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.807183471Z" level=warning msg="cleaning up after shim disconnected" id=433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.807200568Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858415589Z" level=info msg="shim disconnected" id=d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858538671Z" level=warning msg="cleaning up after shim disconnected" id=d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858571867Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.869856441Z" level=info msg="ignoring event" container=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902833989Z" level=info msg="shim disconnected" id=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902915077Z" level=warning msg="cleaning up after shim disconnected" id=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902931675Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.903222133Z" level=info msg="ignoring event" container=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923033679Z" level=info msg="shim disconnected" id=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923142063Z" level=warning msg="cleaning up after shim disconnected" id=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923163460Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.956354178Z" level=info msg="ignoring event" container=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972312578Z" level=info msg="shim disconnected" id=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972455358Z" level=warning msg="cleaning up after shim disconnected" id=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972470655Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.983928705Z" level=info msg="ignoring event" container=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984146873Z" level=info msg="shim disconnected" id=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984252758Z" level=warning msg="cleaning up after shim disconnected" id=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984288153Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.025338783Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.026537423Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.030181335Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.032231460Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.079937870Z" level=info msg="shim disconnected" id=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.080182137Z" level=warning msg="cleaning up after shim disconnected" id=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.080205634Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.081005627Z" level=info msg="ignoring event" container=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.085859677Z" level=error msg="Handler for POST /v1.44/containers/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/start returned error: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown" spanID=8ad30501e61944fe traceID=66f53489c2432281a08c4bc29e4312c9
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.121869753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122354688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122524265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122720539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.147718390Z" level=info msg="ignoring event" container=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.149400465Z" level=info msg="shim disconnected" id=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.150239553Z" level=warning msg="cleaning up after shim disconnected" id=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.150373835Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.171475408Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.172979107Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.176251668Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.176394949Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.339488902Z" level=info msg="shim disconnected" id=4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.339994035Z" level=warning msg="cleaning up after shim disconnected" id=4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.340155613Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.403635810Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.404940335Z" level=error msg="copy shim log" error="read /proc/self/fd/54: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.407200532Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.407416303Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.971959781Z" level=info msg="shim disconnected" id=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.972329331Z" level=warning msg="cleaning up after shim disconnected" id=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.972351428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.972842463Z" level=info msg="ignoring event" container=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642163458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642367636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642386434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.643453820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764353959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764582435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764642428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764854405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787639863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787728753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787754650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.788579862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808445832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808466030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808586517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988155667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988238058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988266755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988363045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184804623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184871316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184887815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.185058998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.410994645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411068838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411085436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411219123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.465947784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.468984982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.469353845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.470454036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898256907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898472992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898499590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898863066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942176864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942483443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942588436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.943029007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024012602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024070699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024082098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024310184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.421904057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424619589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424902972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.425330145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.527027338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535874777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535912375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.536023268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768505131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768864908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768896706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.769020698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:33:55 functional-285400 dockerd[4241]: 2024/04/28 23:33:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:55 functional-285400 dockerd[4241]: 2024/04/28 23:33:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:35:23 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.123028412Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.345669340Z" level=info msg="ignoring event" container=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.347673073Z" level=info msg="shim disconnected" id=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.348070979Z" level=warning msg="cleaning up after shim disconnected" id=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.348453486Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.352464851Z" level=info msg="ignoring event" container=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354749688Z" level=info msg="shim disconnected" id=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354868590Z" level=warning msg="cleaning up after shim disconnected" id=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354929691Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.376243738Z" level=info msg="shim disconnected" id=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.377019051Z" level=info msg="ignoring event" container=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.378375573Z" level=info msg="ignoring event" container=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.379960199Z" level=warning msg="cleaning up after shim disconnected" id=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.377715062Z" level=info msg="shim disconnected" id=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.380354305Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.381968332Z" level=info msg="ignoring event" container=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382252136Z" level=warning msg="cleaning up after shim disconnected" id=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382450940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.386605907Z" level=info msg="ignoring event" container=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382860546Z" level=info msg="shim disconnected" id=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.387495422Z" level=warning msg="cleaning up after shim disconnected" id=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.387550123Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399093511Z" level=info msg="shim disconnected" id=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399158312Z" level=warning msg="cleaning up after shim disconnected" id=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399171812Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.420709563Z" level=info msg="shim disconnected" id=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420806765Z" level=info msg="ignoring event" container=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420854266Z" level=info msg="ignoring event" container=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420887266Z" level=info msg="ignoring event" container=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420907066Z" level=info msg="ignoring event" container=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421020568Z" level=warning msg="cleaning up after shim disconnected" id=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421148570Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422060685Z" level=info msg="shim disconnected" id=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422118786Z" level=warning msg="cleaning up after shim disconnected" id=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422131686Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.422961500Z" level=info msg="ignoring event" container=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421195971Z" level=info msg="shim disconnected" id=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.432400254Z" level=warning msg="cleaning up after shim disconnected" id=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.432526256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421224872Z" level=info msg="shim disconnected" id=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.441078695Z" level=warning msg="cleaning up after shim disconnected" id=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.441179397Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.453847703Z" level=info msg="shim disconnected" id=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.454096307Z" level=warning msg="cleaning up after shim disconnected" id=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.454303211Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421094069Z" level=info msg="shim disconnected" id=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.459714899Z" level=info msg="ignoring event" container=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.469619460Z" level=warning msg="cleaning up after shim disconnected" id=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.469817163Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.211392336Z" level=info msg="shim disconnected" id=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.212023546Z" level=warning msg="cleaning up after shim disconnected" id=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.212157849Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4241]: time="2024-04-28T23:35:28.227767503Z" level=info msg="ignoring event" container=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.248337262Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318093701Z" level=info msg="shim disconnected" id=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318153701Z" level=warning msg="cleaning up after shim disconnected" id=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318164501Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.317892002Z" level=info msg="ignoring event" container=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.381933870Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383411262Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383553862Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383583462Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:35:34 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:35:34 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:35:34 functional-285400 systemd[1]: docker.service: Consumed 9.880s CPU time.
	Apr 28 23:35:34 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:35:34 functional-285400 dockerd[8393]: time="2024-04-28T23:35:34.465411917Z" level=info msg="Starting up"
	Apr 28 23:36:34 functional-285400 dockerd[8393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 28 23:36:34 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 28 23:36:34 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 28 23:36:34 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-285400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m25.3125711s for "functional-285400" cluster.
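The journal above shows why the restart returned exit status 90: after the daemon was stopped and restarted, dockerd could not dial /run/containerd/containerd.sock and gave up on a context deadline. As an illustrative sketch only (not part of the test harness), this is one way a triager might pull the docker unit's journal tail from the guest to confirm that error; the binary path and profile name are taken from the log above, everything else (the timeout, the journalctl invocation) is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Binary path and profile name as they appear in the failing run above.
	const bin = "out/minikube-windows-amd64.exe"
	const profile = "functional-285400"

	// Bound the call, mirroring the 60s dial deadline dockerd itself hit.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	// `minikube ssh -- <cmd>` runs a command on the guest; the docker
	// unit's journal tail is where the containerd dial error is recorded.
	cmd := exec.CommandContext(ctx, bin, "-p", profile, "ssh", "--",
		"sudo journalctl -u docker --no-pager -n 50")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatalf("journal collection failed: %v", err)
	}
}
```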
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (11.1210557s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 16:36:34.993545    9996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
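For readers triaging the "exit status 2 (may be ok)" above: `minikube status` documents that its exit code bit-encodes component health from right to left (1 = minikube host NOK, 2 = cluster NOK, 4 = Kubernetes NOK), so a status of 2 is consistent with the `Running` host stdout while the cluster itself is down. A small illustrative decoder, assuming that documented encoding (this helper is not part of the harness):

```go
package main

import (
	"fmt"
	"strings"
)

// decode interprets a `minikube status` exit code, which (per the
// command's documented behavior) bit-encodes component health:
// bit 0 = minikube host NOK, bit 1 = cluster NOK, bit 2 = Kubernetes NOK.
func decode(code int) string {
	var parts []string
	if code&1 != 0 {
		parts = append(parts, "minikube host not OK")
	}
	if code&2 != 0 {
		parts = append(parts, "cluster not OK")
	}
	if code&4 != 0 {
		parts = append(parts, "kubernetes not OK")
	}
	if len(parts) == 0 {
		return "all components OK"
	}
	return strings.Join(parts, ", ")
}

func main() {
	// Exit status 2 from the post-mortem above: host up, cluster down,
	// matching the "Running" stdout from --format={{.Host}}.
	fmt.Println(decode(2))
}
```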
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
E0428 16:36:59.572947    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (1m49.413215s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-906500                                                         | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:26 PDT |
	| start   | -p functional-285400                                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:26 PDT | 28 Apr 24 16:30 PDT |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-285400                                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:30 PDT | 28 Apr 24 16:32 PDT |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | minikube-local-cache-test:functional-285400                              |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache delete                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | minikube-local-cache-test:functional-285400                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	| ssh     | functional-285400 ssh sudo                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:33 PDT |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-285400                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-285400 ssh                                                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache reload                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	| ssh     | functional-285400 ssh                                                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-285400 kubectl --                                             | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | --context functional-285400                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-285400                                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:34 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
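	Two audit rows above have no End Time: the `ssh sudo crictl inspecti` at 16:33 and the `start --extra-config=...` at 16:34. Those are the invocations that failed or were still running when the report was generated. A throwaway sketch (an assumption, not part of the test harness) for flagging such rows when scanning these tables:

    // Throwaway sketch (not part of the harness): flag Audit rows whose
    // "End Time" column is empty. Feed it the table text on stdin.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		cols := strings.Split(sc.Text(), "|")
    		if len(cols) < 9 { // not a data row of this 7-column table
    			continue
    		}
    		cmd := strings.TrimSpace(cols[1])
    		end := strings.TrimSpace(cols[7])
    		if cmd == "" || cmd == "Command" || strings.HasPrefix(cmd, "----") {
    			continue // continuation line, header, or separator
    		}
    		if end == "" {
    			fmt.Println("no End Time:", cmd, strings.TrimSpace(cols[2]))
    		}
    	}
    }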
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:34:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:34:09.670409    5336 out.go:291] Setting OutFile to fd 316 ...
	I0428 16:34:09.670409    5336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:34:09.670409    5336 out.go:304] Setting ErrFile to fd 636...
	I0428 16:34:09.670409    5336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:34:09.695367    5336 out.go:298] Setting JSON to false
	I0428 16:34:09.699359    5336 start.go:129] hostinfo: {"hostname":"minikube1","uptime":4692,"bootTime":1714342556,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:34:09.699359    5336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:34:09.703419    5336 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:34:09.707192    5336 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:34:09.706905    5336 notify.go:220] Checking for updates...
	I0428 16:34:09.714555    5336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:34:09.719346    5336 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:34:09.722399    5336 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:34:09.724416    5336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 16:34:09.727217    5336 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:34:09.727217    5336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 16:34:14.821984    5336 out.go:177] * Using the hyperv driver based on existing profile
	I0428 16:34:14.825852    5336 start.go:297] selected driver: hyperv
	I0428 16:34:14.825852    5336 start.go:901] validating driver "hyperv" against &{Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:34:14.825914    5336 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 16:34:14.877186    5336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 16:34:14.877773    5336 cni.go:84] Creating CNI manager for ""
	I0428 16:34:14.877917    5336 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:34:14.878098    5336 start.go:340] cluster config:
	{Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:34:14.878098    5336 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:34:14.882083    5336 out.go:177] * Starting "functional-285400" primary control-plane node in "functional-285400" cluster
	I0428 16:34:14.884802    5336 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:34:14.884802    5336 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 16:34:14.884802    5336 cache.go:56] Caching tarball of preloaded images
	I0428 16:34:14.884802    5336 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 16:34:14.885324    5336 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 16:34:14.885500    5336 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\config.json ...
	I0428 16:34:14.887771    5336 start.go:360] acquireMachinesLock for functional-285400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 16:34:14.887771    5336 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-285400"
	I0428 16:34:14.887771    5336 start.go:96] Skipping create...Using existing machine configuration
	I0428 16:34:14.887771    5336 fix.go:54] fixHost starting: 
	I0428 16:34:14.888549    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:17.431511    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:17.431511    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:17.431511    5336 fix.go:112] recreateIfNeeded on functional-285400: state=Running err=<nil>
	W0428 16:34:17.437981    5336 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 16:34:17.441792    5336 out.go:177] * Updating the running hyperv "functional-285400" VM ...
	I0428 16:34:17.444246    5336 machine.go:94] provisionDockerMachine start ...
	I0428 16:34:17.444246    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:19.431599    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:19.431599    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:19.431891    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:21.861845    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:21.861845    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:21.867855    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:21.867855    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:21.867855    5336 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 16:34:22.006056    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-285400
	
	I0428 16:34:22.006128    5336 buildroot.go:166] provisioning hostname "functional-285400"
	I0428 16:34:22.006128    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:23.961102    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:23.961102    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:23.961397    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:26.353179    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:26.353179    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:26.363454    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:26.364144    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:26.364144    5336 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-285400 && echo "functional-285400" | sudo tee /etc/hostname
	I0428 16:34:26.524510    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-285400
	
	I0428 16:34:26.524510    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:28.459766    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:28.459766    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:28.470205    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:30.843432    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:30.843432    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:30.849921    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:30.850454    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:30.850580    5336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-285400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-285400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-285400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 16:34:30.985183    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
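	Every provisioning step above follows the same shape: resolve the VM's IP through a Hyper-V PowerShell query, then run a shell command on the VM over SSH as the `docker` user with the per-machine key. A self-contained sketch of that SSH leg (an illustration, not libmachine's implementation; the address, user, and key path are taken from the log):

    // Illustrative sketch (not libmachine's implementation): run one command
    // on the VM over SSH, using the address, user, and key path from the log.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "172.27.228.231:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	fmt.Printf("out=%q err=%v\n", out, err)
    }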
	I0428 16:34:30.985183    5336 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 16:34:30.985183    5336 buildroot.go:174] setting up certificates
	I0428 16:34:30.985183    5336 provision.go:84] configureAuth start
	I0428 16:34:30.985366    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:32.933174    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:32.933174    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:32.933364    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:35.338190    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:35.338190    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:35.338190    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:37.278681    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:37.278681    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:37.290308    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:39.669808    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:39.669808    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:39.669808    5336 provision.go:143] copyHostCerts
	I0428 16:34:39.670655    5336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 16:34:39.670729    5336 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 16:34:39.671276    5336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 16:34:39.672614    5336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 16:34:39.672614    5336 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 16:34:39.672928    5336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 16:34:39.674444    5336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 16:34:39.674444    5336 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 16:34:39.674551    5336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 16:34:39.675253    5336 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-285400 san=[127.0.0.1 172.27.228.231 functional-285400 localhost minikube]
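	The regenerated server certificate carries a SAN list covering every name the API server may be dialed by: loopback, the VM's current IP, the VM hostname, `localhost`, and `minikube`. A minimal sketch of a template with those SANs (illustrative only, not minikube's actual provisioning code), with the lifetime taken from the `CertExpiration:26280h0m0s` field in the config dump above:

    // Illustrative sketch (not minikube's provisioning code): a server cert
    // template carrying the SAN list and lifetime reported in the log above.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-285400"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.228.231")},
    		DNSNames:     []string{"functional-285400", "localhost", "minikube"},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here purely for illustration; minikube signs with its CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println("cert bytes:", len(der), "err:", err)
    }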
	I0428 16:34:40.042064    5336 provision.go:177] copyRemoteCerts
	I0428 16:34:40.052285    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 16:34:40.052285    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:42.018235    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:42.018235    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:42.018235    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:44.405839    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:44.405839    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:44.412582    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:34:44.521402    5336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4690045s)
	I0428 16:34:44.521539    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0428 16:34:44.571688    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 16:34:44.628076    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 16:34:44.674472    5336 provision.go:87] duration metric: took 13.6892697s to configureAuth
	I0428 16:34:44.677687    5336 buildroot.go:189] setting minikube options for container-runtime
	I0428 16:34:44.678509    5336 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:34:44.678585    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:46.606247    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:46.606247    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:46.612287    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:48.955407    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:48.955407    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:48.968494    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:48.969208    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:48.969208    5336 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 16:34:49.105898    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 16:34:49.105898    5336 buildroot.go:70] root file system type: tmpfs
	I0428 16:34:49.105898    5336 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 16:34:49.105898    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:51.034898    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:51.034898    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:51.034898    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:53.388561    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:53.388561    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:53.402170    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:53.402794    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:53.402794    5336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 16:34:53.554742    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 16:34:53.554742    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:55.516775    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:55.516775    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:55.516967    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:57.909775    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:57.909775    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:57.917785    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:57.918398    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:57.918398    5336 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 16:34:58.063352    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 16:34:58.063352    5336 machine.go:97] duration metric: took 40.6190494s to provisionDockerMachine
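	The `sudo diff -u ... || { sudo mv ...; sudo systemctl ...; }` command above is an update-only-if-changed guard: the new unit is swapped in and docker restarted only when the rendered file differs from what is already installed, so an unchanged configuration costs no restart. The same pattern, sketched locally in Go (paths and restart commands are illustrative):

    // Sketch of the update-only-if-changed pattern used above (assumption,
    // expressed locally rather than over SSH).
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func updateUnit(path string, next []byte) error {
    	cur, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(cur, next) {
    		return nil // unchanged: skip the disruptive restart
    	}
    	if err := os.WriteFile(path+".new", next, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")))
    }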
	I0428 16:34:58.063352    5336 start.go:293] postStartSetup for "functional-285400" (driver="hyperv")
	I0428 16:34:58.063352    5336 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 16:34:58.076762    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 16:34:58.076762    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:00.026456    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:00.038833    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:00.038833    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:02.457605    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:02.463234    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:02.463389    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:35:02.574524    5336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4967346s)
	I0428 16:35:02.587930    5336 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 16:35:02.596801    5336 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 16:35:02.596868    5336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 16:35:02.597417    5336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 16:35:02.598245    5336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 16:35:02.598940    5336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts -> hosts in /etc/test/nested/copy/3228
	I0428 16:35:02.611700    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3228
	I0428 16:35:02.630107    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 16:35:02.681414    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts --> /etc/test/nested/copy/3228/hosts (40 bytes)
	I0428 16:35:02.727129    5336 start.go:296] duration metric: took 4.6637705s for postStartSetup
	I0428 16:35:02.727129    5336 fix.go:56] duration metric: took 47.8392911s for fixHost
	I0428 16:35:02.727129    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:04.697058    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:04.697058    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:04.697058    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:07.133015    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:07.133015    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:07.152541    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:35:07.153127    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:35:07.153127    5336 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 16:35:07.289188    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714347307.289563241
	
	I0428 16:35:07.289188    5336 fix.go:216] guest clock: 1714347307.289563241
	I0428 16:35:07.289188    5336 fix.go:229] Guest: 2024-04-28 16:35:07.289563241 -0700 PDT Remote: 2024-04-28 16:35:02.7271293 -0700 PDT m=+53.169552901 (delta=4.562433941s)
	I0428 16:35:07.289188    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:09.258292    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:09.273220    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:09.273220    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:11.685280    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:11.685280    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:11.691223    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:35:11.691882    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:35:11.691882    5336 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714347307
	I0428 16:35:11.836481    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 28 23:35:07 UTC 2024
	
	I0428 16:35:11.836512    5336 fix.go:236] clock set: Sun Apr 28 23:35:07 UTC 2024
	 (err=<nil>)
	I0428 16:35:11.836512    5336 start.go:83] releasing machines lock for "functional-285400", held for 56.9486611s
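	The clock fix above compares the guest's `date +%s.%N` against the host's wall clock and, when the skew exceeds minikube's threshold, resets the guest with `sudo date -s @<epoch>`. Reproducing the reported delta from the values in the log (a sketch; the threshold itself is not shown in this output):

    // Sketch (assumption): the clock-skew computation behind the
    // "Guest ... Remote ... delta=..." line above, using the logged values.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func skew(guestEpoch string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestEpoch), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Values taken from the log: guest `date +%s.%N` vs. host wall clock.
    	host := time.Date(2024, 4, 28, 16, 35, 2, 727129300, time.FixedZone("PDT", -7*3600))
    	d, _ := skew("1714347307.289563241", host)
    	fmt.Println("delta:", d) // ~4.56s, above threshold, so the guest clock is reset
    	fmt.Println("fix: sudo date -s @1714347307")
    }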
	I0428 16:35:11.836512    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:13.833804    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:13.833804    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:13.833804    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:16.284930    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:16.284930    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:16.289975    5336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 16:35:16.289975    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:16.300859    5336 ssh_runner.go:195] Run: cat /version.json
	I0428 16:35:16.300859    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:18.303877    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:18.303877    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:18.303877    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:18.326400    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:18.326400    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:18.326457    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:20.795217    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:20.799973    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:20.799973    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:35:20.832479    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:20.832479    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:20.832479    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:35:20.954768    5336 ssh_runner.go:235] Completed: cat /version.json: (4.6539027s)
	I0428 16:35:20.954768    5336 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6647867s)
	I0428 16:35:20.968100    5336 ssh_runner.go:195] Run: systemctl --version
	I0428 16:35:20.988621    5336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 16:35:21.000425    5336 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 16:35:21.012303    5336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 16:35:21.040383    5336 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0428 16:35:21.040383    5336 start.go:494] detecting cgroup driver to use...
	I0428 16:35:21.040678    5336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:35:21.097415    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 16:35:21.140030    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 16:35:21.161995    5336 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 16:35:21.175661    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 16:35:21.209791    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:35:21.246398    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 16:35:21.296864    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:35:21.330585    5336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 16:35:21.364489    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 16:35:21.401931    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 16:35:21.434990    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 16:35:21.475473    5336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 16:35:21.509506    5336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 16:35:21.540570    5336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:35:21.813872    5336 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 16:35:21.853437    5336 start.go:494] detecting cgroup driver to use...
	I0428 16:35:21.868455    5336 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 16:35:21.907574    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:35:21.957811    5336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 16:35:22.001639    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:35:22.040498    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 16:35:22.070226    5336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:35:22.120053    5336 ssh_runner.go:195] Run: which cri-dockerd
	I0428 16:35:22.137240    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 16:35:22.164588    5336 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 16:35:22.208764    5336 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 16:35:22.487639    5336 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 16:35:22.757254    5336 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 16:35:22.757363    5336 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 16:35:22.807324    5336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:35:23.095223    5336 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 16:36:34.496078    5336 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4006856s)
	I0428 16:36:34.506523    5336 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 16:36:34.606810    5336 out.go:177] 
	W0428 16:36:34.611316    5336 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 28 23:28:08 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.458031073Z" level=info msg="Starting up"
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.459132842Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.460004117Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.500839567Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526294849Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526404946Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526466044Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526481344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526545242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526674239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526852634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527002029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527060828Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527176124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527267922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527554014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.534676013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.534792010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535161999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535266996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535432692Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535495990Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535511190Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562162539Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562292435Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562319034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562337134Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562354533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562556428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563132211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563340805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563443403Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563467302Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563484301Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563501301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563516800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563533200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563560899Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563676996Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563821392Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563843891Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563869391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563885990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563903890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564003687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564039386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564070885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564122983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564137283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564150983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564177082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564191981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564206881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564220081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564238980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564262979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564277079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564291079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564347177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564386676Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564401276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564412475Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564674868Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564695067Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565010258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565255551Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565408647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565538844Z" level=info msg="containerd successfully booted in 0.066369s"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.531334331Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.562701703Z" level=info msg="Loading containers: start."
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.834821291Z" level=info msg="Loading containers: done."
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.861786023Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.862018421Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.978533892Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.978695591Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:09 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.460834317Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.462423349Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:28:39 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464231485Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464287086Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464310087Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:28:40 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:28:40 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:28:40 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.547302815Z" level=info msg="Starting up"
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.548969049Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.553355337Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1040
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.586424804Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617787336Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617895638Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617948039Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617964140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618012441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618030041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618213945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618305847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618326847Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618337547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618363248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618517051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.621562612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.621828618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622133524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622226726Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622258026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622275327Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622292827Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622421630Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622472831Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622490631Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622505331Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622519532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622570833Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623109643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623296347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623417750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623440650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623465851Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623513652Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623546552Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623561353Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623575753Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623589153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623602153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623615954Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623636754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623670555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623786457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623809758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623822858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623859659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623874159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623886959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623900059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623917660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623929760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623941360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623955061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623971561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624098263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624200065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624224266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624352369Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624423470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624471871Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624489271Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624582273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624619874Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624633274Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625329088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625558393Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625897400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625964801Z" level=info msg="containerd successfully booted in 0.041442s"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.594527123Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.619764932Z" level=info msg="Loading containers: start."
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.794928563Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.874085758Z" level=info msg="Loading containers: done."
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.898483250Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.898544351Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.953742164Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:41 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.955514199Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.694528243Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.696757388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697046394Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697107895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697114195Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:28:50 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:28:51 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:28:51 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:28:51 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.790116226Z" level=info msg="Starting up"
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.791109646Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.792225068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1344
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.825171932Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853462002Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853595705Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853779609Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853806409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853838510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853862510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854053614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854152916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854174617Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854185817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854212017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854337420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857038074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857145077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857304280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857393782Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857423682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857442283Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857453083Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857739389Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857796290Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857815490Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857832190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857847291Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857899992Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858234699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858391302Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858411502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858425702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858445003Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858461503Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858475403Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858489304Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858522104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858552805Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858582406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858612006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858634407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858830211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858877111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858893812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858909412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858924712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858937713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858969413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859060615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859091916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859106816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859121016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859135417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859153317Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859178318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859193918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859207518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859270719Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859290720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859303420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859315720Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859393622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859417022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859428023Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859748329Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859907232Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859989034Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.860011234Z" level=info msg="containerd successfully booted in 0.036080s"
	Apr 28 23:28:53 functional-285400 dockerd[1338]: time="2024-04-28T23:28:53.261253278Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:55 functional-285400 dockerd[1338]: time="2024-04-28T23:28:55.954551264Z" level=info msg="Loading containers: start."
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.146101525Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.231907754Z" level=info msg="Loading containers: done."
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.256426148Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.256552251Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.302071268Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.302246672Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:56 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250283526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250430432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250528135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.252113295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334584914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334655716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334669617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334815522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.367942175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368010078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368026478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368111581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.404412954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.405670802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.405917811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.406508433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643175982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643396891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643565597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643948611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768173509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768340015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768361816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.769060742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899868788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899974992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899993793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901334044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901512951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901622555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901452248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.902130574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.735735186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.735972992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.736000893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.736804912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012009031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012102533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012121734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012333639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.221516592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.221985704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.222033705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.222175808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.989878612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990062316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990158018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990385723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021060231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021189529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021252427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021897214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.102986348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103111445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103127345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103233143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.635772700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636219292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636323190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636619984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.919962564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920236658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920349456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920534153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:39 functional-285400 dockerd[1338]: time="2024-04-28T23:29:39.294187745Z" level=info msg="ignoring event" container=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295781617Z" level=info msg="shim disconnected" id=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295917214Z" level=warning msg="cleaning up after shim disconnected" id=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295934314Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1338]: time="2024-04-28T23:29:39.481116050Z" level=info msg="ignoring event" container=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.485795968Z" level=info msg="shim disconnected" id=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.486559854Z" level=warning msg="cleaning up after shim disconnected" id=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.486618353Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.050160482Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.210355165Z" level=info msg="shim disconnected" id=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.211313483Z" level=info msg="ignoring event" container=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.211741190Z" level=warning msg="cleaning up after shim disconnected" id=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.211776991Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297470133Z" level=info msg="shim disconnected" id=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297556235Z" level=warning msg="cleaning up after shim disconnected" id=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297573535Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.297957642Z" level=info msg="ignoring event" container=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.329295406Z" level=info msg="ignoring event" container=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332690667Z" level=info msg="shim disconnected" id=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332789669Z" level=warning msg="cleaning up after shim disconnected" id=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332804069Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.375837044Z" level=info msg="ignoring event" container=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376572557Z" level=info msg="shim disconnected" id=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376680259Z" level=warning msg="cleaning up after shim disconnected" id=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376694659Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.405635180Z" level=info msg="ignoring event" container=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.407489013Z" level=info msg="shim disconnected" id=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.409482949Z" level=warning msg="cleaning up after shim disconnected" id=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.410230562Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.414879946Z" level=info msg="shim disconnected" id=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423219296Z" level=warning msg="cleaning up after shim disconnected" id=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423265697Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.430282023Z" level=info msg="ignoring event" container=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.432475463Z" level=info msg="ignoring event" container=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.432776868Z" level=info msg="ignoring event" container=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433309078Z" level=info msg="ignoring event" container=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433462081Z" level=info msg="ignoring event" container=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433585883Z" level=info msg="ignoring event" container=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.433872288Z" level=info msg="shim disconnected" id=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.434020091Z" level=warning msg="cleaning up after shim disconnected" id=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.434216994Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.438849678Z" level=info msg="shim disconnected" id=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.441729729Z" level=info msg="ignoring event" container=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441899732Z" level=warning msg="cleaning up after shim disconnected" id=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.442304640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423077594Z" level=info msg="shim disconnected" id=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.452785628Z" level=warning msg="cleaning up after shim disconnected" id=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.452931531Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441442824Z" level=info msg="shim disconnected" id=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.459901156Z" level=warning msg="cleaning up after shim disconnected" id=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.459958557Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423209896Z" level=info msg="shim disconnected" id=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.460989076Z" level=warning msg="cleaning up after shim disconnected" id=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.461061877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441488125Z" level=info msg="shim disconnected" id=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.471877672Z" level=warning msg="cleaning up after shim disconnected" id=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.471899572Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1338]: time="2024-04-28T23:31:31.229543896Z" level=info msg="ignoring event" container=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.230517013Z" level=info msg="shim disconnected" id=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.230888320Z" level=warning msg="cleaning up after shim disconnected" id=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.231009422Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.148962964Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189588096Z" level=info msg="shim disconnected" id=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189747061Z" level=warning msg="cleaning up after shim disconnected" id=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189763458Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.190664659Z" level=info msg="ignoring event" container=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.261851744Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262765043Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262851424Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262896614Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:31:37 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:31:37 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:31:37 functional-285400 systemd[1]: docker.service: Consumed 6.131s CPU time.
	Apr 28 23:31:37 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.352537531Z" level=info msg="Starting up"
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.353712889Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.357319447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4247
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.401843683Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430334119Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430470591Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430538977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430556673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430586367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430599864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430810221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430966789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430990084Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431001881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431029576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431299420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.434861687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435029652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435251907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435493957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435639227Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435795295Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435927967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436442961Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436693910Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436762096Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436789190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436805787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436862575Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437674608Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437887164Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437996642Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438065527Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438082824Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438139412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438180204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438201499Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438217796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438231993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438245790Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438258588Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438285582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438302879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438448948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438553927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438576522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438592419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438611115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439014032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439231088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439281377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439302773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439322169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439350263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439438745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439572017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439596512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439611009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439751380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439853859Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439874255Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439888252Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439970035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440019425Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440035322Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440468833Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440766771Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440875149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440943935Z" level=info msg="containerd successfully booted in 0.040342s"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.401217622Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.427439091Z" level=info msg="Loading containers: start."
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.712437419Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.790582628Z" level=info msg="Loading containers: done."
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.823815253Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.823979921Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.871316341Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.872491515Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:31:38 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.543496994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.544361160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.544512137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.547577062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572864745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572969229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572998724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.573088910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728876980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728953068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728966866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.729073250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831430795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831853130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831959713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.832160482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.856317241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.856679185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.857317886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.859001825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259745248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259845434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259864431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259971516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534393076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534497761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534513358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.547783046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595301000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595477974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595498771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595655249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708389805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708605074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708626271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.709006117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.806658846Z" level=info msg="shim disconnected" id=433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.807183471Z" level=warning msg="cleaning up after shim disconnected" id=433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.807200568Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858415589Z" level=info msg="shim disconnected" id=d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858538671Z" level=warning msg="cleaning up after shim disconnected" id=d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858571867Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.869856441Z" level=info msg="ignoring event" container=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902833989Z" level=info msg="shim disconnected" id=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902915077Z" level=warning msg="cleaning up after shim disconnected" id=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902931675Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.903222133Z" level=info msg="ignoring event" container=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923033679Z" level=info msg="shim disconnected" id=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923142063Z" level=warning msg="cleaning up after shim disconnected" id=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923163460Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.956354178Z" level=info msg="ignoring event" container=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972312578Z" level=info msg="shim disconnected" id=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972455358Z" level=warning msg="cleaning up after shim disconnected" id=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972470655Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.983928705Z" level=info msg="ignoring event" container=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984146873Z" level=info msg="shim disconnected" id=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984252758Z" level=warning msg="cleaning up after shim disconnected" id=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984288153Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.025338783Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.026537423Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.030181335Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.032231460Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.079937870Z" level=info msg="shim disconnected" id=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.080182137Z" level=warning msg="cleaning up after shim disconnected" id=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.080205634Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.081005627Z" level=info msg="ignoring event" container=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.085859677Z" level=error msg="Handler for POST /v1.44/containers/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/start returned error: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown" spanID=8ad30501e61944fe traceID=66f53489c2432281a08c4bc29e4312c9
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.121869753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122354688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122524265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122720539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.147718390Z" level=info msg="ignoring event" container=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.149400465Z" level=info msg="shim disconnected" id=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.150239553Z" level=warning msg="cleaning up after shim disconnected" id=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.150373835Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.171475408Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.172979107Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.176251668Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.176394949Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.339488902Z" level=info msg="shim disconnected" id=4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.339994035Z" level=warning msg="cleaning up after shim disconnected" id=4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.340155613Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.403635810Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.404940335Z" level=error msg="copy shim log" error="read /proc/self/fd/54: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.407200532Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.407416303Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.971959781Z" level=info msg="shim disconnected" id=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.972329331Z" level=warning msg="cleaning up after shim disconnected" id=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.972351428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.972842463Z" level=info msg="ignoring event" container=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642163458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642367636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642386434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.643453820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764353959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764582435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764642428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764854405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787639863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787728753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787754650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.788579862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808445832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808466030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808586517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988155667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988238058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988266755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988363045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184804623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184871316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184887815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.185058998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.410994645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411068838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411085436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411219123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.465947784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.468984982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.469353845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.470454036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898256907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898472992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898499590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898863066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942176864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942483443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942588436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.943029007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024012602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024070699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024082098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024310184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.421904057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424619589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424902972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.425330145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.527027338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535874777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535912375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.536023268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768505131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768864908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768896706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.769020698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:33:55 functional-285400 dockerd[4241]: 2024/04/28 23:33:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:55 functional-285400 dockerd[4241]: 2024/04/28 23:33:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:35:23 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.123028412Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.345669340Z" level=info msg="ignoring event" container=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.347673073Z" level=info msg="shim disconnected" id=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.348070979Z" level=warning msg="cleaning up after shim disconnected" id=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.348453486Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.352464851Z" level=info msg="ignoring event" container=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354749688Z" level=info msg="shim disconnected" id=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354868590Z" level=warning msg="cleaning up after shim disconnected" id=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354929691Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.376243738Z" level=info msg="shim disconnected" id=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.377019051Z" level=info msg="ignoring event" container=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.378375573Z" level=info msg="ignoring event" container=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.379960199Z" level=warning msg="cleaning up after shim disconnected" id=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.377715062Z" level=info msg="shim disconnected" id=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.380354305Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.381968332Z" level=info msg="ignoring event" container=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382252136Z" level=warning msg="cleaning up after shim disconnected" id=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382450940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.386605907Z" level=info msg="ignoring event" container=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382860546Z" level=info msg="shim disconnected" id=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.387495422Z" level=warning msg="cleaning up after shim disconnected" id=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.387550123Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399093511Z" level=info msg="shim disconnected" id=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399158312Z" level=warning msg="cleaning up after shim disconnected" id=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399171812Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.420709563Z" level=info msg="shim disconnected" id=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420806765Z" level=info msg="ignoring event" container=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420854266Z" level=info msg="ignoring event" container=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420887266Z" level=info msg="ignoring event" container=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420907066Z" level=info msg="ignoring event" container=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421020568Z" level=warning msg="cleaning up after shim disconnected" id=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421148570Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422060685Z" level=info msg="shim disconnected" id=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422118786Z" level=warning msg="cleaning up after shim disconnected" id=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422131686Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.422961500Z" level=info msg="ignoring event" container=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421195971Z" level=info msg="shim disconnected" id=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.432400254Z" level=warning msg="cleaning up after shim disconnected" id=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.432526256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421224872Z" level=info msg="shim disconnected" id=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.441078695Z" level=warning msg="cleaning up after shim disconnected" id=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.441179397Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.453847703Z" level=info msg="shim disconnected" id=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.454096307Z" level=warning msg="cleaning up after shim disconnected" id=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.454303211Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421094069Z" level=info msg="shim disconnected" id=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.459714899Z" level=info msg="ignoring event" container=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.469619460Z" level=warning msg="cleaning up after shim disconnected" id=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.469817163Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.211392336Z" level=info msg="shim disconnected" id=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.212023546Z" level=warning msg="cleaning up after shim disconnected" id=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.212157849Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4241]: time="2024-04-28T23:35:28.227767503Z" level=info msg="ignoring event" container=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.248337262Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318093701Z" level=info msg="shim disconnected" id=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318153701Z" level=warning msg="cleaning up after shim disconnected" id=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318164501Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.317892002Z" level=info msg="ignoring event" container=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.381933870Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383411262Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383553862Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383583462Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:35:34 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:35:34 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:35:34 functional-285400 systemd[1]: docker.service: Consumed 9.880s CPU time.
	Apr 28 23:35:34 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:35:34 functional-285400 dockerd[8393]: time="2024-04-28T23:35:34.465411917Z" level=info msg="Starting up"
	Apr 28 23:36:34 functional-285400 dockerd[8393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 28 23:36:34 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 28 23:36:34 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 28 23:36:34 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 16:36:34.615290    5336 out.go:239] * 
	W0428 16:36:34.616727    5336 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 16:36:34.621712    5336 out.go:177] 
	
	
	==> Docker <==
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="error getting RW layer size for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:37:34 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:37:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17'"
	Apr 28 23:37:35 functional-285400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 28 23:37:35 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:37:35 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-28T23:37:37Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +15.453966] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.220302] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.841050] kauditd_printk_skb: 88 callbacks suppressed
	[Apr28 23:30] kauditd_printk_skb: 10 callbacks suppressed
	[Apr28 23:31] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.670661] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +0.288124] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:38:35 up 11 min,  0 users,  load average: 0.01, 0.23, 0.20
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 28 23:38:27 functional-285400 kubelet[5373]: E0428 23:38:27.299481    5373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.27.228.231:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-285400.17ca95ceca63525c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-285400,UID:35dedd627fdfea3b9aff90de42393f4a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:23.55920342 +0000 UTC m=+217.970230578,LastTimestamp:2024-04-28 23:35:23.55920342 +0000 UTC m=+217.970230578,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 28 23:38:29 functional-285400 kubelet[5373]: E0428 23:38:29.032191    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?resourceVersion=0&timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:38:29 functional-285400 kubelet[5373]: E0428 23:38:29.033032    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:38:29 functional-285400 kubelet[5373]: E0428 23:38:29.034171    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:38:29 functional-285400 kubelet[5373]: E0428 23:38:29.035243    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:38:29 functional-285400 kubelet[5373]: E0428 23:38:29.036459    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:38:29 functional-285400 kubelet[5373]: E0428 23:38:29.036546    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 28 23:38:30 functional-285400 kubelet[5373]: E0428 23:38:30.816481    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.443700145s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 28 23:38:33 functional-285400 kubelet[5373]: E0428 23:38:33.972033    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.129597    5373 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.129736    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.130021    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.129820    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.130048    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.129770    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.130293    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: I0428 23:38:35.130307    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.129864    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.130328    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.129885    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.130350    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.130366    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.131076    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.131114    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:38:35 functional-285400 kubelet[5373]: E0428 23:38:35.131373    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0428 16:36:46.097319    5652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:37:34.787300    5652 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.819995    5652 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.848703    5652 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.876823    5652 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.904310    5652 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.937771    5652 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.964162    5652 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:37:34.994113    5652 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
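
The capture above shows the whole failure chain: the restarted dockerd (pid 8393) timed out dialing /run/containerd/containerd.sock, docker.service exited 1/FAILURE, and cri-dockerd and the kubelet then lost /var/run/docker.sock (PLEG last seen active past the 3m0s threshold), leaving the apiserver on 172.27.228.231:8441 unreachable. A minimal triage sketch for a node in this state; these are standard systemctl/crictl invocations over minikube ssh, not commands captured in this run:

	# Check whether containerd is serving its socket at all before docker retries.
	minikube -p functional-285400 ssh -- sudo systemctl status docker containerd
	# Query the CRI endpoint cri-dockerd exposes; this is the same endpoint the
	# harness's crictl fallback above timed out against.
	minikube -p functional-285400 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
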
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (11.4685894s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0428 16:38:35.910774    9480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (277.70s)
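
The FAIL is rooted in the Docker restart at 23:35:34, not in the extra configuration the test passes: the new daemon hit "context deadline exceeded" dialing containerd, and systemd kept rescheduling the unit ("restart counter is at 2"). A sketch of how that restart window could be inspected on the guest, assuming the journal from this boot is still available:

	# Pull both units' journals around the failed restart; --since narrows to the
	# window seen in the log above.
	minikube -p functional-285400 ssh -- sudo journalctl -u docker -u containerd --since "2024-04-28 23:35:00" --no-pager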

TestFunctional/serial/ComponentHealth (181.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-285400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-285400 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (2.1954076s)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-285400 get po -l tier=control-plane -n kube-system -o=json": exit status 1
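
Note that exit status 1 here comes from the connection failure in stderr, not from the query result: kubectl could not reach the apiserver on 172.27.228.231:8441, so the empty "items" list is not evidence that the control-plane pods are gone. For reference, a sketch of the same health query against a recovered cluster; tier=control-plane is the label kubeadm places on its static pod manifests:

	# Selects the kube-apiserver, etcd, kube-scheduler and kube-controller-manager
	# static pods in kube-system once the apiserver answers again.
	kubectl --context functional-285400 get po -l tier=control-plane -n kube-system -o wide
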
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (11.3616684s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0428 16:38:49.570121    9144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
E0428 16:40:36.430471    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (2m35.3169621s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:24 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-906500 --log_dir                                                  | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-906500                                                         | nospam-906500     | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:25 PDT | 28 Apr 24 16:26 PDT |
	| start   | -p functional-285400                                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:26 PDT | 28 Apr 24 16:30 PDT |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-285400                                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:30 PDT | 28 Apr 24 16:32 PDT |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache add                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | minikube-local-cache-test:functional-285400                              |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache delete                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | minikube-local-cache-test:functional-285400                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:32 PDT |
	| ssh     | functional-285400 ssh sudo                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:32 PDT | 28 Apr 24 16:33 PDT |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-285400                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-285400 ssh                                                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-285400 cache reload                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	| ssh     | functional-285400 ssh                                                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-285400 kubectl --                                             | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|         | --context functional-285400                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-285400                                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:34 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
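	For reference, the cache portion of the audit table above reduces to the following command sequence (profile and image names copied from the rows; the -p flag stands in for however the harness selects the profile, so this is a condensed replay sketch, not additional test output):
	
	# Replay of the cache rows from the table (profile functional-285400).
	minikube -p functional-285400 cache add registry.k8s.io/pause:3.1
	minikube -p functional-285400 cache add registry.k8s.io/pause:3.3
	minikube -p functional-285400 cache add registry.k8s.io/pause:latest
	minikube -p functional-285400 cache add minikube-local-cache-test:functional-285400
	minikube -p functional-285400 cache delete minikube-local-cache-test:functional-285400
	minikube cache delete registry.k8s.io/pause:3.3    # cache is shared; no profile needed
	minikube cache list
	minikube -p functional-285400 cache reload         # re-pulls after the in-VM docker rmi above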
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:34:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:34:09.670409    5336 out.go:291] Setting OutFile to fd 316 ...
	I0428 16:34:09.670409    5336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:34:09.670409    5336 out.go:304] Setting ErrFile to fd 636...
	I0428 16:34:09.670409    5336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:34:09.695367    5336 out.go:298] Setting JSON to false
	I0428 16:34:09.699359    5336 start.go:129] hostinfo: {"hostname":"minikube1","uptime":4692,"bootTime":1714342556,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:34:09.699359    5336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:34:09.703419    5336 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:34:09.707192    5336 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:34:09.706905    5336 notify.go:220] Checking for updates...
	I0428 16:34:09.714555    5336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:34:09.719346    5336 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:34:09.722399    5336 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:34:09.724416    5336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 16:34:09.727217    5336 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:34:09.727217    5336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 16:34:14.821984    5336 out.go:177] * Using the hyperv driver based on existing profile
	I0428 16:34:14.825852    5336 start.go:297] selected driver: hyperv
	I0428 16:34:14.825852    5336 start.go:901] validating driver "hyperv" against &{Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:34:14.825914    5336 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 16:34:14.877186    5336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 16:34:14.877773    5336 cni.go:84] Creating CNI manager for ""
	I0428 16:34:14.877917    5336 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:34:14.878098    5336 start.go:340] cluster config:
	{Name:functional-285400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-285400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.228.231 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:34:14.878098    5336 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:34:14.882083    5336 out.go:177] * Starting "functional-285400" primary control-plane node in "functional-285400" cluster
	I0428 16:34:14.884802    5336 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:34:14.884802    5336 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 16:34:14.884802    5336 cache.go:56] Caching tarball of preloaded images
	I0428 16:34:14.884802    5336 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 16:34:14.885324    5336 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 16:34:14.885500    5336 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\config.json ...
	I0428 16:34:14.887771    5336 start.go:360] acquireMachinesLock for functional-285400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 16:34:14.887771    5336 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-285400"
	I0428 16:34:14.887771    5336 start.go:96] Skipping create...Using existing machine configuration
	I0428 16:34:14.887771    5336 fix.go:54] fixHost starting: 
	I0428 16:34:14.888549    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:17.431511    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:17.431511    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:17.431511    5336 fix.go:112] recreateIfNeeded on functional-285400: state=Running err=<nil>
	W0428 16:34:17.437981    5336 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 16:34:17.441792    5336 out.go:177] * Updating the running hyperv "functional-285400" VM ...
	I0428 16:34:17.444246    5336 machine.go:94] provisionDockerMachine start ...
	I0428 16:34:17.444246    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:19.431599    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:19.431599    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:19.431891    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:21.861845    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:21.861845    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:21.867855    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:21.867855    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:21.867855    5336 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 16:34:22.006056    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-285400
	
	I0428 16:34:22.006128    5336 buildroot.go:166] provisioning hostname "functional-285400"
	I0428 16:34:22.006128    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:23.961102    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:23.961102    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:23.961397    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:26.353179    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:26.353179    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:26.363454    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:26.364144    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:26.364144    5336 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-285400 && echo "functional-285400" | sudo tee /etc/hostname
	I0428 16:34:26.524510    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-285400
	
	I0428 16:34:26.524510    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:28.459766    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:28.459766    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:28.470205    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:30.843432    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:30.843432    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:30.849921    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:30.850454    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:30.850580    5336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-285400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-285400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-285400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 16:34:30.985183    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
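	The /etc/hosts script above is idempotent by design: the leading grep -xq whole-line match skips the edit when a matching entry already exists, and the sed branch rewrites an existing 127.0.1.1 line rather than appending a second one. A quick in-guest check of that property (a sketch, not part of the test run):
	
	# Expect exactly one matching line no matter how many times the patch runs.
	grep -c '^127.0.1.1 functional-285400$' /etc/hosts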
	I0428 16:34:30.985183    5336 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 16:34:30.985183    5336 buildroot.go:174] setting up certificates
	I0428 16:34:30.985183    5336 provision.go:84] configureAuth start
	I0428 16:34:30.985366    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:32.933174    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:32.933174    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:32.933364    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:35.338190    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:35.338190    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:35.338190    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:37.278681    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:37.278681    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:37.290308    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:39.669808    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:39.669808    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:39.669808    5336 provision.go:143] copyHostCerts
	I0428 16:34:39.670655    5336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 16:34:39.670729    5336 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 16:34:39.671276    5336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 16:34:39.672614    5336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 16:34:39.672614    5336 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 16:34:39.672928    5336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 16:34:39.674444    5336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 16:34:39.674444    5336 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 16:34:39.674551    5336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 16:34:39.675253    5336 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-285400 san=[127.0.0.1 172.27.228.231 functional-285400 localhost minikube]
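	provision.go generates that server certificate in Go; purely to illustrate the same CA-signed certificate with the SAN list logged above, an equivalent openssl invocation would look like this (file names match the log, but openssl is not what minikube runs):
	
	# Hypothetical openssl equivalent of the logged server-cert generation.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.functional-285400"
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.27.228.231,DNS:functional-285400,DNS:localhost,DNS:minikube')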
	I0428 16:34:40.042064    5336 provision.go:177] copyRemoteCerts
	I0428 16:34:40.052285    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 16:34:40.052285    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:42.018235    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:42.018235    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:42.018235    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:44.405839    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:44.405839    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:44.412582    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:34:44.521402    5336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4690045s)
	I0428 16:34:44.521539    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0428 16:34:44.571688    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 16:34:44.628076    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 16:34:44.674472    5336 provision.go:87] duration metric: took 13.6892697s to configureAuth
	I0428 16:34:44.677687    5336 buildroot.go:189] setting minikube options for container-runtime
	I0428 16:34:44.678509    5336 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:34:44.678585    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:46.606247    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:46.606247    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:46.612287    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:48.955407    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:48.955407    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:48.968494    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:48.969208    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:48.969208    5336 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 16:34:49.105898    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 16:34:49.105898    5336 buildroot.go:70] root file system type: tmpfs
	I0428 16:34:49.105898    5336 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 16:34:49.105898    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:51.034898    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:51.034898    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:51.034898    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:53.388561    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:53.388561    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:53.402170    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:53.402794    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:53.402794    5336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 16:34:53.554742    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 16:34:53.554742    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:34:55.516775    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:34:55.516775    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:55.516967    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:34:57.909775    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:34:57.909775    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:34:57.917785    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:34:57.918398    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:34:57.918398    5336 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 16:34:58.063352    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
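	The comment block inside the unit above describes the systemd rule the provisioner relies on: a drop-in that overrides ExecStart= must first set it to empty, otherwise systemd sees two ExecStart= values and refuses to start the service. A minimal standalone reproduction of that pattern (generic dockerd flags, not the minikube unit itself):
	
	# Minimal ExecStart-clearing override, per the rule quoted in the unit.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker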
	I0428 16:34:58.063352    5336 machine.go:97] duration metric: took 40.6190494s to provisionDockerMachine
	I0428 16:34:58.063352    5336 start.go:293] postStartSetup for "functional-285400" (driver="hyperv")
	I0428 16:34:58.063352    5336 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 16:34:58.076762    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 16:34:58.076762    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:00.026456    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:00.038833    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:00.038833    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:02.457605    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:02.463234    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:02.463389    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:35:02.574524    5336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4967346s)
	I0428 16:35:02.587930    5336 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 16:35:02.596801    5336 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 16:35:02.596868    5336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 16:35:02.597417    5336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 16:35:02.598245    5336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 16:35:02.598940    5336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts -> hosts in /etc/test/nested/copy/3228
	I0428 16:35:02.611700    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3228
	I0428 16:35:02.630107    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 16:35:02.681414    5336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts --> /etc/test/nested/copy/3228/hosts (40 bytes)
	I0428 16:35:02.727129    5336 start.go:296] duration metric: took 4.6637705s for postStartSetup
	I0428 16:35:02.727129    5336 fix.go:56] duration metric: took 47.8392911s for fixHost
	I0428 16:35:02.727129    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:04.697058    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:04.697058    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:04.697058    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:07.133015    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:07.133015    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:07.152541    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:35:07.153127    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:35:07.153127    5336 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 16:35:07.289188    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714347307.289563241
	
	I0428 16:35:07.289188    5336 fix.go:216] guest clock: 1714347307.289563241
	I0428 16:35:07.289188    5336 fix.go:229] Guest: 2024-04-28 16:35:07.289563241 -0700 PDT Remote: 2024-04-28 16:35:02.7271293 -0700 PDT m=+53.169552901 (delta=4.562433941s)
	I0428 16:35:07.289188    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:09.258292    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:09.273220    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:09.273220    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:11.685280    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:11.685280    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:11.691223    5336 main.go:141] libmachine: Using SSH client type: native
	I0428 16:35:11.691882    5336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.228.231 22 <nil> <nil>}
	I0428 16:35:11.691882    5336 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714347307
	I0428 16:35:11.836481    5336 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 28 23:35:07 UTC 2024
	
	I0428 16:35:11.836512    5336 fix.go:236] clock set: Sun Apr 28 23:35:07 UTC 2024 (err=<nil>)
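	The clock-sync step pushes the host's epoch into the guest with date -s; the ~4.56s delta noted above is the drift being corrected. Decoding the epoch locally reproduces the timestamp the guest echoed back:
	
	# Decode the epoch passed to `sudo date -s` above.
	date -u -d @1714347307    # Sun Apr 28 23:35:07 UTC 2024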
	I0428 16:35:11.836512    5336 start.go:83] releasing machines lock for "functional-285400", held for 56.9486611s
	I0428 16:35:11.836512    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:13.833804    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:13.833804    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:13.833804    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:16.284930    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:16.284930    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:16.289975    5336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 16:35:16.289975    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:16.300859    5336 ssh_runner.go:195] Run: cat /version.json
	I0428 16:35:16.300859    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
	I0428 16:35:18.303877    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:18.303877    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:18.303877    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:18.326400    5336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 16:35:18.326400    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:18.326457    5336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
	I0428 16:35:20.795217    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:20.799973    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:20.799973    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:35:20.832479    5336 main.go:141] libmachine: [stdout =====>] : 172.27.228.231
	
	I0428 16:35:20.832479    5336 main.go:141] libmachine: [stderr =====>] : 
	I0428 16:35:20.832479    5336 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
	I0428 16:35:20.954768    5336 ssh_runner.go:235] Completed: cat /version.json: (4.6539027s)
	I0428 16:35:20.954768    5336 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6647867s)
	I0428 16:35:20.968100    5336 ssh_runner.go:195] Run: systemctl --version
	I0428 16:35:20.988621    5336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 16:35:21.000425    5336 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 16:35:21.012303    5336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 16:35:21.040383    5336 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0428 16:35:21.040383    5336 start.go:494] detecting cgroup driver to use...
	I0428 16:35:21.040678    5336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:35:21.097415    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 16:35:21.140030    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 16:35:21.161995    5336 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 16:35:21.175661    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 16:35:21.209791    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:35:21.246398    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 16:35:21.296864    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 16:35:21.330585    5336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 16:35:21.364489    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 16:35:21.401931    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 16:35:21.434990    5336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 16:35:21.475473    5336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 16:35:21.509506    5336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 16:35:21.540570    5336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:35:21.813872    5336 ssh_runner.go:195] Run: sudo systemctl restart containerd
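	The run of sed edits above rewrites /etc/containerd/config.toml in place before containerd is restarted. A spot-check of the values those commands are meant to leave behind (assuming a stock config.toml layout):
	
	# Values the sed edits above set, per the logged commands.
	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected: sandbox_image = "registry.k8s.io/pause:3.9"
	#           restrict_oom_score_adj = false
	#           SystemdCgroup = false          (cgroupfs, not systemd, as the driver)
	#           conf_dir = "/etc/cni/net.d"
	#           enable_unprivileged_ports = true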
	I0428 16:35:21.853437    5336 start.go:494] detecting cgroup driver to use...
	I0428 16:35:21.868455    5336 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 16:35:21.907574    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:35:21.957811    5336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 16:35:22.001639    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 16:35:22.040498    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 16:35:22.070226    5336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 16:35:22.120053    5336 ssh_runner.go:195] Run: which cri-dockerd
	I0428 16:35:22.137240    5336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 16:35:22.164588    5336 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 16:35:22.208764    5336 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 16:35:22.487639    5336 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 16:35:22.757254    5336 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 16:35:22.757363    5336 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
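	The log records only the size of the daemon.json written here (130 bytes) and the cgroupfs driver choice, not the file itself. A representative daemon.json enforcing that choice would be (an assumption about the contents, not the byte-for-byte file from this run):
	
	# Hypothetical daemon.json matching the logged "cgroupfs" configuration.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF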
	I0428 16:35:22.807324    5336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 16:35:23.095223    5336 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 16:36:34.496078    5336 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4006856s)
	I0428 16:36:34.506523    5336 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 16:36:34.606810    5336 out.go:177] 
	W0428 16:36:34.611316    5336 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 28 23:28:08 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.458031073Z" level=info msg="Starting up"
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.459132842Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:08 functional-285400 dockerd[669]: time="2024-04-28T23:28:08.460004117Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.500839567Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526294849Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526404946Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526466044Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526481344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526545242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526674239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.526852634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527002029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527060828Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527176124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527267922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.527554014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.534676013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.534792010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535161999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535266996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535432692Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535495990Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.535511190Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562162539Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562292435Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562319034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562337134Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562354533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.562556428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563132211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563340805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563443403Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563467302Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563484301Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563501301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563516800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563533200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563560899Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563676996Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563821392Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563843891Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563869391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563885990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.563903890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564003687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564039386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564070885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564122983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564137283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564150983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564177082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564191981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564206881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564220081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564238980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564262979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564277079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564291079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564347177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564386676Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564401276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564412475Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564537072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564674868Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.564695067Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565010258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565255551Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565408647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:08 functional-285400 dockerd[675]: time="2024-04-28T23:28:08.565538844Z" level=info msg="containerd successfully booted in 0.066369s"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.531334331Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.562701703Z" level=info msg="Loading containers: start."
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.834821291Z" level=info msg="Loading containers: done."
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.861786023Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.862018421Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.978533892Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:09 functional-285400 dockerd[669]: time="2024-04-28T23:28:09.978695591Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:09 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.460834317Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.462423349Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:28:39 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464231485Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464287086Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:28:39 functional-285400 dockerd[669]: time="2024-04-28T23:28:39.464310087Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:28:40 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:28:40 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:28:40 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.547302815Z" level=info msg="Starting up"
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.548969049Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:40 functional-285400 dockerd[1034]: time="2024-04-28T23:28:40.553355337Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1040
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.586424804Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617787336Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617895638Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617948039Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.617964140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618012441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618030041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618213945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618305847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618326847Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618337547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618363248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.618517051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.621562612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.621828618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622133524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622226726Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622258026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622275327Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622292827Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622421630Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622472831Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622490631Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622505331Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622519532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.622570833Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623109643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623296347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623417750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623440650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623465851Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623513652Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623546552Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623561353Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623575753Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623589153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623602153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623615954Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623636754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623670555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623786457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623809758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623822858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623859659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623874159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623886959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623900059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623917660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623929760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623941360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623955061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.623971561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624098263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624200065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624224266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624352369Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624423470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624471871Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624489271Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624582273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624619874Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.624633274Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625329088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625558393Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625897400Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:40 functional-285400 dockerd[1040]: time="2024-04-28T23:28:40.625964801Z" level=info msg="containerd successfully booted in 0.041442s"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.594527123Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.619764932Z" level=info msg="Loading containers: start."
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.794928563Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.874085758Z" level=info msg="Loading containers: done."
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.898483250Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.898544351Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.953742164Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:41 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:28:41 functional-285400 dockerd[1034]: time="2024-04-28T23:28:41.955514199Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.694528243Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.696757388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697046394Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697107895Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:28:50 functional-285400 dockerd[1034]: time="2024-04-28T23:28:50.697114195Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:28:50 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:28:51 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:28:51 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:28:51 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.790116226Z" level=info msg="Starting up"
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.791109646Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:28:51 functional-285400 dockerd[1338]: time="2024-04-28T23:28:51.792225068Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1344
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.825171932Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853462002Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853595705Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853779609Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853806409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853838510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.853862510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854053614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854152916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854174617Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854185817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854212017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.854337420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857038074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857145077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857304280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857393782Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857423682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857442283Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857453083Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857739389Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857796290Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857815490Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857832190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857847291Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.857899992Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858234699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858391302Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858411502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858425702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858445003Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858461503Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858475403Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858489304Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858522104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858552805Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858582406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858612006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858634407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858830211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858877111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858893812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858909412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858924712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858937713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.858969413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859060615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859091916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859106816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859121016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859135417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859153317Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859178318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859193918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859207518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859270719Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859290720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859303420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859315720Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859393622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859417022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859428023Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859748329Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859907232Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.859989034Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:28:51 functional-285400 dockerd[1344]: time="2024-04-28T23:28:51.860011234Z" level=info msg="containerd successfully booted in 0.036080s"
	Apr 28 23:28:53 functional-285400 dockerd[1338]: time="2024-04-28T23:28:53.261253278Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:28:55 functional-285400 dockerd[1338]: time="2024-04-28T23:28:55.954551264Z" level=info msg="Loading containers: start."
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.146101525Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.231907754Z" level=info msg="Loading containers: done."
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.256426148Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.256552251Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.302071268Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:28:56 functional-285400 dockerd[1338]: time="2024-04-28T23:28:56.302246672Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:28:56 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250283526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250430432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.250528135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.252113295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334584914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334655716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334669617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.334815522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.367942175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368010078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368026478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.368111581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.404412954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.405670802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.405917811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.406508433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643175982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643396891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643565597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.643948611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768173509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768340015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.768361816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.769060742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899868788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899974992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.899993793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901334044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901512951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901622555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.901452248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:05 functional-285400 dockerd[1344]: time="2024-04-28T23:29:05.902130574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.735735186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.735972992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.736000893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:27 functional-285400 dockerd[1344]: time="2024-04-28T23:29:27.736804912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012009031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012102533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012121734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.012333639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.221516592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.221985704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.222033705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.222175808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.989878612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990062316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990158018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:28 functional-285400 dockerd[1344]: time="2024-04-28T23:29:28.990385723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021060231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021189529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021252427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.021897214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.102986348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103111445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103127345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:29 functional-285400 dockerd[1344]: time="2024-04-28T23:29:29.103233143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.635772700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636219292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636323190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.636619984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.919962564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920236658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920349456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:35 functional-285400 dockerd[1344]: time="2024-04-28T23:29:35.920534153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:29:39 functional-285400 dockerd[1338]: time="2024-04-28T23:29:39.294187745Z" level=info msg="ignoring event" container=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295781617Z" level=info msg="shim disconnected" id=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295917214Z" level=warning msg="cleaning up after shim disconnected" id=8d5e97cbfab6ecb55c7862a379fabef8ee4c3bf6b88c924ec55e9d674f8200ff namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.295934314Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1338]: time="2024-04-28T23:29:39.481116050Z" level=info msg="ignoring event" container=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.485795968Z" level=info msg="shim disconnected" id=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.486559854Z" level=warning msg="cleaning up after shim disconnected" id=2e56d97fbdc237cc232a1a800036e067a2e3c0003a37c3517318684e03c08b17 namespace=moby
	Apr 28 23:29:39 functional-285400 dockerd[1344]: time="2024-04-28T23:29:39.486618353Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.050160482Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.210355165Z" level=info msg="shim disconnected" id=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.211313483Z" level=info msg="ignoring event" container=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.211741190Z" level=warning msg="cleaning up after shim disconnected" id=8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.211776991Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297470133Z" level=info msg="shim disconnected" id=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297556235Z" level=warning msg="cleaning up after shim disconnected" id=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.297573535Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.297957642Z" level=info msg="ignoring event" container=0df4de5342babdd3ce0b681ffb7e4a6d6754e626768a7d8a9cad7b6e7701d7c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.329295406Z" level=info msg="ignoring event" container=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332690667Z" level=info msg="shim disconnected" id=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332789669Z" level=warning msg="cleaning up after shim disconnected" id=d60d61f6290488b2f433e5aa8390867f66da81ec16d56be0658ab3b13cf731c5 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.332804069Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.375837044Z" level=info msg="ignoring event" container=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376572557Z" level=info msg="shim disconnected" id=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376680259Z" level=warning msg="cleaning up after shim disconnected" id=917e469fc278599bde79f1d86be05e59228d066ab844f2b1e4ad13463b80726b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.376694659Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.405635180Z" level=info msg="ignoring event" container=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.407489013Z" level=info msg="shim disconnected" id=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.409482949Z" level=warning msg="cleaning up after shim disconnected" id=d4f34492bd3b0f286baea8cb4ac3122ae01bf1b823fa92eda9a29be625347083 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.410230562Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.414879946Z" level=info msg="shim disconnected" id=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423219296Z" level=warning msg="cleaning up after shim disconnected" id=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423265697Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.430282023Z" level=info msg="ignoring event" container=86ed10ca148a30c368742748979f60f6a1f0263bdcec99fdd57b3befb7c1b49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.432475463Z" level=info msg="ignoring event" container=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.432776868Z" level=info msg="ignoring event" container=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433309078Z" level=info msg="ignoring event" container=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433462081Z" level=info msg="ignoring event" container=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.433585883Z" level=info msg="ignoring event" container=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.433872288Z" level=info msg="shim disconnected" id=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.434020091Z" level=warning msg="cleaning up after shim disconnected" id=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.434216994Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.438849678Z" level=info msg="shim disconnected" id=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1338]: time="2024-04-28T23:31:26.441729729Z" level=info msg="ignoring event" container=4142c8b3542b7f1679578dfb73963b8b685025bd23d1e59460779c5d9f603275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441899732Z" level=warning msg="cleaning up after shim disconnected" id=3291d76a665ca7306204ff005404f611201b95281d170b3d45b03674581cfd93 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.442304640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423077594Z" level=info msg="shim disconnected" id=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.452785628Z" level=warning msg="cleaning up after shim disconnected" id=36a11974a0fdc771c2c02eaaa8a6c463237e87b664be484c510d1923c55c478d namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.452931531Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441442824Z" level=info msg="shim disconnected" id=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.459901156Z" level=warning msg="cleaning up after shim disconnected" id=76cb8f18544b6a74ef3a4068db9415d2a18fc06006a5481fa73e3ce4aef9ec60 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.459958557Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.423209896Z" level=info msg="shim disconnected" id=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.460989076Z" level=warning msg="cleaning up after shim disconnected" id=393441639d880519bc5e8a238d8ae1824e6cff15a5fba113ef99db2284388fa2 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.461061877Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.441488125Z" level=info msg="shim disconnected" id=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.471877672Z" level=warning msg="cleaning up after shim disconnected" id=7c1efde2e1d06cf3cb390f0588f1792d93287331b4b1fefbd47755a9d4591e77 namespace=moby
	Apr 28 23:31:26 functional-285400 dockerd[1344]: time="2024-04-28T23:31:26.471899572Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1338]: time="2024-04-28T23:31:31.229543896Z" level=info msg="ignoring event" container=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.230517013Z" level=info msg="shim disconnected" id=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.230888320Z" level=warning msg="cleaning up after shim disconnected" id=cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36 namespace=moby
	Apr 28 23:31:31 functional-285400 dockerd[1344]: time="2024-04-28T23:31:31.231009422Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.148962964Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189588096Z" level=info msg="shim disconnected" id=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189747061Z" level=warning msg="cleaning up after shim disconnected" id=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1344]: time="2024-04-28T23:31:36.189763458Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.190664659Z" level=info msg="ignoring event" container=e945fb6ccd0b4af579b9a35a822159fe44d4b44c2b60e54871e3c27f61b19127 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.261851744Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262765043Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262851424Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:31:36 functional-285400 dockerd[1338]: time="2024-04-28T23:31:36.262896614Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:31:37 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:31:37 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:31:37 functional-285400 systemd[1]: docker.service: Consumed 6.131s CPU time.
	Apr 28 23:31:37 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.352537531Z" level=info msg="Starting up"
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.353712889Z" level=info msg="containerd not running, starting managed containerd"
	Apr 28 23:31:37 functional-285400 dockerd[4241]: time="2024-04-28T23:31:37.357319447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4247
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.401843683Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430334119Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430470591Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430538977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430556673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430586367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430599864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430810221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430966789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.430990084Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431001881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431029576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.431299420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.434861687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435029652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435251907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435493957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435639227Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435795295Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.435927967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436442961Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436693910Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436762096Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436789190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436805787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.436862575Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437674608Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437887164Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.437996642Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438065527Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438082824Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438139412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438180204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438201499Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438217796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438231993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438245790Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438258588Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438285582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438302879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438448948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438553927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438576522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438592419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.438611115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439014032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439231088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439281377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439302773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439322169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439350263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439438745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439572017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439596512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439611009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439751380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439853859Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439874255Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439888252Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.439970035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440019425Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440035322Z" level=info msg="NRI interface is disabled by configuration."
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440468833Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440766771Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440875149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 28 23:31:37 functional-285400 dockerd[4247]: time="2024-04-28T23:31:37.440943935Z" level=info msg="containerd successfully booted in 0.040342s"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.401217622Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.427439091Z" level=info msg="Loading containers: start."
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.712437419Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.790582628Z" level=info msg="Loading containers: done."
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.823815253Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.823979921Z" level=info msg="Daemon has completed initialization"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.871316341Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 28 23:31:38 functional-285400 dockerd[4241]: time="2024-04-28T23:31:38.872491515Z" level=info msg="API listen on [::]:2376"
	Apr 28 23:31:38 functional-285400 systemd[1]: Started Docker Application Container Engine.
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.543496994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.544361160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.544512137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.547577062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572864745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572969229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.572998724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.573088910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728876980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728953068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.728966866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.729073250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831430795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831853130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.831959713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.832160482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.856317241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.856679185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.857317886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:41 functional-285400 dockerd[4247]: time="2024-04-28T23:31:41.859001825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259745248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259845434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259864431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.259971516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534393076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534497761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.534513358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.547783046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595301000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595477974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595498771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.595655249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708389805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708605074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.708626271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.709006117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.806658846Z" level=info msg="shim disconnected" id=433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.807183471Z" level=warning msg="cleaning up after shim disconnected" id=433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.807200568Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858415589Z" level=info msg="shim disconnected" id=d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858538671Z" level=warning msg="cleaning up after shim disconnected" id=d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.858571867Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.869856441Z" level=info msg="ignoring event" container=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902833989Z" level=info msg="shim disconnected" id=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902915077Z" level=warning msg="cleaning up after shim disconnected" id=9d061e1398da210ccedc55558b9715b005785f793704a3a1963900016b9748a7 namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.902931675Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.903222133Z" level=info msg="ignoring event" container=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923033679Z" level=info msg="shim disconnected" id=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923142063Z" level=warning msg="cleaning up after shim disconnected" id=cd5d493f46dd815db87370558658470061272f346c4f8aea960387a4269afb1a namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.923163460Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.956354178Z" level=info msg="ignoring event" container=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972312578Z" level=info msg="shim disconnected" id=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972455358Z" level=warning msg="cleaning up after shim disconnected" id=b37acf5d4707644e999567de42840e4212fd14e4bd844ec31b16a59374f992ce namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.972470655Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4241]: time="2024-04-28T23:31:42.983928705Z" level=info msg="ignoring event" container=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984146873Z" level=info msg="shim disconnected" id=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984252758Z" level=warning msg="cleaning up after shim disconnected" id=d57ac6a873278bf29480cd3567e4c210fd7ba99e9fd5a27385a48e16a1d563ba namespace=moby
	Apr 28 23:31:42 functional-285400 dockerd[4247]: time="2024-04-28T23:31:42.984288153Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.025338783Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.026537423Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.030181335Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.032231460Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.079937870Z" level=info msg="shim disconnected" id=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.080182137Z" level=warning msg="cleaning up after shim disconnected" id=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.080205634Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.081005627Z" level=info msg="ignoring event" container=d09c631e65fbb41cdd3967bc41988c2c732dded5f2bd5a04ce97aeff890200b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.085859677Z" level=error msg="Handler for POST /v1.44/containers/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/start returned error: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown" spanID=8ad30501e61944fe traceID=66f53489c2432281a08c4bc29e4312c9
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.121869753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122354688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122524265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.122720539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.147718390Z" level=info msg="ignoring event" container=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.149400465Z" level=info msg="shim disconnected" id=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.150239553Z" level=warning msg="cleaning up after shim disconnected" id=0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.150373835Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.171475408Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.172979107Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.176251668Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.176394949Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.339488902Z" level=info msg="shim disconnected" id=4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.339994035Z" level=warning msg="cleaning up after shim disconnected" id=4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28 namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.340155613Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.403635810Z" level=warning msg="cleanup warnings time=\"2024-04-28T23:31:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.404940335Z" level=error msg="copy shim log" error="read /proc/self/fd/54: file already closed" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.407200532Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.407416303Z" level=error msg="stream copy error: reading from a closed fifo"
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.971959781Z" level=info msg="shim disconnected" id=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.972329331Z" level=warning msg="cleaning up after shim disconnected" id=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4247]: time="2024-04-28T23:31:43.972351428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:31:43 functional-285400 dockerd[4241]: time="2024-04-28T23:31:43.972842463Z" level=info msg="ignoring event" container=9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642163458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642367636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.642386434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.643453820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764353959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764582435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764642428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.764854405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787639863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787728753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.787754650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.788579862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808445832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808466030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.808586517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988155667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988238058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988266755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:46 functional-285400 dockerd[4247]: time="2024-04-28T23:31:46.988363045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184804623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184871316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.184887815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.185058998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.410994645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411068838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411085436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.411219123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.465947784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.468984982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.469353845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:47 functional-285400 dockerd[4247]: time="2024-04-28T23:31:47.470454036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898256907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898472992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898499590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.898863066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942176864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942483443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.942588436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:52 functional-285400 dockerd[4247]: time="2024-04-28T23:31:52.943029007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024012602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024070699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024082098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.024310184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.421904057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424619589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.424902972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.425330145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.527027338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535874777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.535912375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.536023268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768505131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768864908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.768896706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:31:53 functional-285400 dockerd[4247]: time="2024-04-28T23:31:53.769020698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 28 23:33:55 functional-285400 dockerd[4241]: 2024/04/28 23:33:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:55 functional-285400 dockerd[4241]: 2024/04/28 23:33:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:33:56 functional-285400 dockerd[4241]: 2024/04/28 23:33:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 28 23:35:23 functional-285400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.123028412Z" level=info msg="Processing signal 'terminated'"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.345669340Z" level=info msg="ignoring event" container=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.347673073Z" level=info msg="shim disconnected" id=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.348070979Z" level=warning msg="cleaning up after shim disconnected" id=c5b56189153d37c39b3c0d51303ff00a335ae49fbdf6c42d91e93f9a2c4c8247 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.348453486Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.352464851Z" level=info msg="ignoring event" container=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354749688Z" level=info msg="shim disconnected" id=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354868590Z" level=warning msg="cleaning up after shim disconnected" id=ad8616b3f34ca3cec0f6ed11ebf1cc497fec4969b58f0b7cbbcb034c4cb10b20 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.354929691Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.376243738Z" level=info msg="shim disconnected" id=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.377019051Z" level=info msg="ignoring event" container=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.378375573Z" level=info msg="ignoring event" container=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.379960199Z" level=warning msg="cleaning up after shim disconnected" id=6bbce171d93135edb19e516531563602e22d6615480e8870ec06606de0414eec namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.377715062Z" level=info msg="shim disconnected" id=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.380354305Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.381968332Z" level=info msg="ignoring event" container=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382252136Z" level=warning msg="cleaning up after shim disconnected" id=7fae71c72bf2b1513a7bd3c90c36c9b8a9e51404f82db25e3a4018d8bc43465d namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382450940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.386605907Z" level=info msg="ignoring event" container=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.382860546Z" level=info msg="shim disconnected" id=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.387495422Z" level=warning msg="cleaning up after shim disconnected" id=a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.387550123Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399093511Z" level=info msg="shim disconnected" id=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399158312Z" level=warning msg="cleaning up after shim disconnected" id=ad1c9573c21797b0ea472d18eeab8bc045d1533420a69ba0bde8f07d3ebbf6ef namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.399171812Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.420709563Z" level=info msg="shim disconnected" id=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420806765Z" level=info msg="ignoring event" container=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420854266Z" level=info msg="ignoring event" container=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420887266Z" level=info msg="ignoring event" container=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.420907066Z" level=info msg="ignoring event" container=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421020568Z" level=warning msg="cleaning up after shim disconnected" id=3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421148570Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422060685Z" level=info msg="shim disconnected" id=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422118786Z" level=warning msg="cleaning up after shim disconnected" id=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.422131686Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.422961500Z" level=info msg="ignoring event" container=64884080de2ca36c0cdaf609e669a3ad00e1608a76621cf03e2dbca2cbec0712 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421195971Z" level=info msg="shim disconnected" id=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.432400254Z" level=warning msg="cleaning up after shim disconnected" id=d79f63518700b60d650faf6eafaca752f0654a73da372540b9f6449f3446e518 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.432526256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421224872Z" level=info msg="shim disconnected" id=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.441078695Z" level=warning msg="cleaning up after shim disconnected" id=adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.441179397Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.453847703Z" level=info msg="shim disconnected" id=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.454096307Z" level=warning msg="cleaning up after shim disconnected" id=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.454303211Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.421094069Z" level=info msg="shim disconnected" id=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4241]: time="2024-04-28T23:35:23.459714899Z" level=info msg="ignoring event" container=68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.469619460Z" level=warning msg="cleaning up after shim disconnected" id=ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438 namespace=moby
	Apr 28 23:35:23 functional-285400 dockerd[4247]: time="2024-04-28T23:35:23.469817163Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.211392336Z" level=info msg="shim disconnected" id=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.212023546Z" level=warning msg="cleaning up after shim disconnected" id=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4247]: time="2024-04-28T23:35:28.212157849Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:28 functional-285400 dockerd[4241]: time="2024-04-28T23:35:28.227767503Z" level=info msg="ignoring event" container=2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.248337262Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318093701Z" level=info msg="shim disconnected" id=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318153701Z" level=warning msg="cleaning up after shim disconnected" id=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4247]: time="2024-04-28T23:35:33.318164501Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.317892002Z" level=info msg="ignoring event" container=e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.381933870Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383411262Z" level=info msg="Daemon shutdown complete"
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383553862Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 28 23:35:33 functional-285400 dockerd[4241]: time="2024-04-28T23:35:33.383583462Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 28 23:35:34 functional-285400 systemd[1]: docker.service: Deactivated successfully.
	Apr 28 23:35:34 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:35:34 functional-285400 systemd[1]: docker.service: Consumed 9.880s CPU time.
	Apr 28 23:35:34 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	Apr 28 23:35:34 functional-285400 dockerd[8393]: time="2024-04-28T23:35:34.465411917Z" level=info msg="Starting up"
	Apr 28 23:36:34 functional-285400 dockerd[8393]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 28 23:36:34 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 28 23:36:34 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 28 23:36:34 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 16:36:34.615290    5336 out.go:239] * 
	W0428 16:36:34.616727    5336 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 16:36:34.621712    5336 out.go:177] 
	
	
	==> Docker <==
	Apr 28 23:40:35 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 28 23:40:35 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID '0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="error getting RW layer size for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:40:35 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:40:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-28T23:40:37Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +15.453966] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.220302] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.841050] kauditd_printk_skb: 88 callbacks suppressed
	[Apr28 23:30] kauditd_printk_skb: 10 callbacks suppressed
	[Apr28 23:31] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.670661] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +0.288124] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:41:36 up 14 min,  0 users,  load average: 0.12, 0.16, 0.18
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 28 23:41:30 functional-285400 kubelet[5373]: E0428 23:41:30.849184    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m8.476393888s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 28 23:41:33 functional-285400 kubelet[5373]: E0428 23:41:33.682682    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?resourceVersion=0&timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:41:33 functional-285400 kubelet[5373]: E0428 23:41:33.683774    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:41:33 functional-285400 kubelet[5373]: E0428 23:41:33.685151    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:41:33 functional-285400 kubelet[5373]: E0428 23:41:33.686500    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:41:33 functional-285400 kubelet[5373]: E0428 23:41:33.687517    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:41:33 functional-285400 kubelet[5373]: E0428 23:41:33.687691    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: I0428 23:41:35.842429    5373 status_manager.go:853] "Failed to get status for pod" podUID="f291e154417b21ff4db6980bc8535b89" pod="kube-system/kube-apiserver-functional-285400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.849734    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m13.47696631s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.864585    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.864722    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.864747    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.870327    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.870423    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.870370    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.871265    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: I0428 23:41:35.871542    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.871874    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.872427    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.873103    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.873354    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.875098    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.875220    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:41:35 functional-285400 kubelet[5373]: E0428 23:41:35.875858    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 28 23:41:36 functional-285400 kubelet[5373]: E0428 23:41:36.035870    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	
-- /stdout --
** stderr ** 
	W0428 16:39:00.931330    7764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:39:35.399040    7764 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:39:35.431699    7764 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:39:35.462621    7764 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:39:35.492982    7764 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:39:35.527120    7764 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:40:35.635320    7764 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:40:35.667983    7764 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:40:35.698292    7764 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (11.7346572s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0428 16:41:36.724751    9444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (181.07s)
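The ComponentHealth failure above is downstream of a single event: after the SIGTERM restart, dockerd exits with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`, and everything else (crictl, kubectl, the apiserver on 8441) fails because the container runtime never comes back. A minimal triage sketch against the profile from this report; these commands are illustrative and were not part of the test run:

	# is containerd itself up, or did only dockerd die?
	out/minikube-windows-amd64.exe -p functional-285400 ssh -- sudo systemctl status containerd docker
	# containerd logs around the restart window
	out/minikube-windows-amd64.exe -p functional-285400 ssh -- sudo journalctl -u containerd -n 50 --no-pager
	# the socket dockerd could not dial
	out/minikube-windows-amd64.exe -p functional-285400 ssh -- ls -l /run/containerd/containerd.sock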
TestFunctional/serial/InvalidService (4.26s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-285400 apply -f testdata\invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-285400 apply -f testdata\invalidsvc.yaml: exit status 1 (4.2517879s)
** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://172.27.228.231:8441/openapi/v2?timeout=32s": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test.go:2319: kubectl --context functional-285400 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (4.26s)
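InvalidService never reaches the invalid manifest: `kubectl apply` fails while downloading the OpenAPI schema because nothing is listening on 172.27.228.231:8441 (the apiserver stopped along with the Docker daemon above). A quick reachability sketch, assuming the same kubectl context; not taken from the original run:

	# fails fast when the apiserver is down, independent of any manifest
	kubectl --context functional-285400 get --raw /readyz
	# the error text suggests --validate=false, but that only skips schema
	# validation; the apply itself still needs a reachable apiserver
	kubectl --context functional-285400 apply -f testdata\invalidsvc.yaml --validate=false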
TestFunctional/parallel/ConfigCmd (1.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-285400 config unset cpus" to be -""- but got *"W0428 16:47:42.039844   13980 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 config get cpus: exit status 14 (239.7238ms)
** stderr ** 
	W0428 16:47:42.300133   13988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-285400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0428 16:47:42.300133   13988 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-285400 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0428 16:47:42.523717    9440 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-285400 config get cpus" to be -""- but got *"W0428 16:47:42.787311    8972 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-285400 config unset cpus" to be -""- but got *"W0428 16:47:43.044553   14132 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 config get cpus: exit status 14 (222.6265ms)
** stderr ** 
	W0428 16:47:43.280790   13460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-285400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0428 16:47:43.280790   13460 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.45s)
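All five ConfigCmd assertions fail the same way: the actual stderr is the expected text prefixed by the `Unable to resolve the current Docker CLI context "default"` warning, so the exact-match comparison can never pass. A plausible check on the Jenkins host, assuming the Docker CLI is on PATH (illustrative only, not from this run):

	# "default" is built in and should always resolve
	docker context ls
	# shows what the CLI finds (or fails to find) under .docker\contexts\meta
	docker context inspect default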
TestFunctional/parallel/StatusCmd (188.76s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 status: exit status 2 (14.4420458s)
-- stdout --
	functional-285400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
** stderr ** 
	W0428 16:47:42.037850   11604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
functional_test.go:852: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-285400 status" : exit status 2
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (13.0170835s)
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured
-- /stdout --
** stderr ** 
	W0428 16:47:56.511504    6484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-285400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 status -o json: exit status 2 (14.4818736s)
-- stdout --
	{"Name":"functional-285400","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
** stderr ** 
	W0428 16:48:09.528958    1972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-285400 status -o json" : exit status 2
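For anyone consuming results like the JSON block above programmatically, a minimal sketch follows. It is not the functional_test.go helper; the binary name and profile are taken from this log, and the struct mirrors the JSON printed above. Note that `minikube status` deliberately exits non-zero (status 2 here) when a component such as the apiserver is Stopped, so the exit error has to be tolerated before parsing stdout:

```go
// Minimal sketch, assuming a `minikube` binary on PATH and the profile
// name from this report; not the helper used by functional_test.go.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os/exec"
)

// Mirrors the JSON printed by `minikube status -o json` in the log above.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-285400", "status", "-o", "json").Output()
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		fmt.Println("could not run minikube:", err)
		return
	}
	// Exit status 2 means a component is down, but stdout still holds the JSON.
	var s clusterStatus
	if err := json.Unmarshal(out, &s); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}
```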
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (13.1439719s)
-- stdout --
	Running
-- /stdout --
** stderr ** 
	W0428 16:48:23.971891    9752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (2m1.7266382s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command  |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh       | functional-285400 ssh                                                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT |                     |
	|           | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache     | functional-285400 cache reload                                                                      | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	| ssh       | functional-285400 ssh                                                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache     | delete                                                                                              | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | registry.k8s.io/pause:3.1                                                                           |                   |                   |         |                     |                     |
	| cache     | delete                                                                                              | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| kubectl   | functional-285400 kubectl --                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | --context functional-285400                                                                         |                   |                   |         |                     |                     |
	|           | get pods                                                                                            |                   |                   |         |                     |                     |
	| start     | -p functional-285400                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:34 PDT |                     |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                            |                   |                   |         |                     |                     |
	|           | --wait=all                                                                                          |                   |                   |         |                     |                     |
	| config    | functional-285400 config unset                                                                      | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| cp        | functional-285400 cp                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| config    | functional-285400 config get                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-285400 config set                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config    | functional-285400 config get                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-285400 config unset                                                                      | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-285400 config get                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| start     | -p functional-285400                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start     | -p functional-285400                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-285400 ssh -n                                                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:48 PDT |
	|           | functional-285400 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| dashboard | --url --port 36195                                                                                  | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | -p functional-285400                                                                                |                   |                   |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| cp        | functional-285400 cp functional-285400:/home/docker/cp-test.txt                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|           | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3335772000\001\cp-test.txt |                   |                   |         |                     |                     |
	| license   |                                                                                                     | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	| ssh       | functional-285400 ssh sudo                                                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT |                     |
	|           | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| ssh       | functional-285400 ssh -n                                                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|           | functional-285400 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| image     | functional-285400 image load --daemon                                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT |                     |
	|           | gcr.io/google-containers/addon-resizer:functional-285400                                            |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| cp        | functional-285400 cp                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-285400 ssh -n                                                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT |                     |
	|           | functional-285400 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:47:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:47:48.546696   10508 out.go:291] Setting OutFile to fd 932 ...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.547686   10508 out.go:304] Setting ErrFile to fd 996...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.580285   10508 out.go:298] Setting JSON to false
	I0428 16:47:48.586291   10508 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5511,"bootTime":1714342556,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:48.586291   10508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:48.591295   10508 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:48.595382   10508 notify.go:220] Checking for updates...
	I0428 16:47:48.597996   10508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:48.600555   10508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:48.603556   10508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:48.605554   10508 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:48.607555   10508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 28 23:49:38 functional-285400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 28 23:49:38 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:49:38 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-28T23:49:40Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +15.453966] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.220302] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.841050] kauditd_printk_skb: 88 callbacks suppressed
	[Apr28 23:30] kauditd_printk_skb: 10 callbacks suppressed
	[Apr28 23:31] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.670661] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +0.288124] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:50:38 up 23 min,  0 users,  load average: 0.05, 0.08, 0.11
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.015201    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?resourceVersion=0&timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.016436    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.017326    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.018426    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.019265    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.019375    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.246483    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: I0428 23:50:35.842827    5373 status_manager.go:853] "Failed to get status for pod" podUID="f291e154417b21ff4db6980bc8535b89" pod="kube-system/kube-apiserver-functional-285400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.964192    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 15m13.591410216s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.373991    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.382692    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.382723    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.383021    5373 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384439    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384472    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384600    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384865    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384951    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384997    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: I0428 23:50:38.385015    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.385594    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.385694    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.386999    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.387295    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.387730    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	
-- /stdout --
** stderr ** 
	W0428 16:48:37.107477    2424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:49:37.950058    2424 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:37.996981    2424 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.027038    2424 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.062091    2424 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.099053    2424 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.130032    2424 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.170709    2424 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.204223    2424 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (11.8875243s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0428 16:50:38.886884    9544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (188.76s)
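The root symptom throughout this failure is `dial tcp 172.27.228.231:8441: connect: connection refused`: the host reports Running while the apiserver port never answers. A quick way to separate a stopped apiserver from a stopped VM is a plain TCP probe. A minimal sketch, using the address from the kubelet errors above; this is a diagnostic illustration only, not part of the suite:

```go
// Diagnostic sketch only: probes the apiserver endpoint seen in the logs
// (172.27.228.231:8441) to distinguish a stopped apiserver from a stopped VM.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "172.27.228.231:8441" // endpoint from the kubelet errors above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// "connection refused" => host up, apiserver down (this report's case);
		// a timeout instead would suggest the VM or network path is gone.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```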
TestFunctional/parallel/ServiceCmdConnect (300.98s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-285400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-285400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.207675s)
** stderr ** 
	error: failed to create deployment: Post "https://172.27.228.231:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-285400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-285400 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-285400 describe po hello-node-connect: exit status 1 (2.1921022s)
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
** /stderr **
functional_test.go:1600: "kubectl --context functional-285400 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-285400 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-285400 logs -l app=hello-node-connect: exit status 1 (2.1785961s)
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
** /stderr **
functional_test.go:1606: "kubectl --context functional-285400 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-285400 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-285400 describe svc hello-node-connect: exit status 1 (2.1751559s)
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
** /stderr **
functional_test.go:1612: "kubectl --context functional-285400 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
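Every kubectl call in this post-mortem fails the same way because the apiserver at 172.27.228.231:8441 refuses connections. A hedged sketch of how such calls could be gated on apiserver reachability first, so a down apiserver yields one clear error instead of four; the polling helper, its timeout, and the use of `kubectl version` as a readiness probe are assumptions for illustration, not minikube's actual logic:

```go
// Sketch (not minikube's code): poll the context's apiserver before issuing
// the kubectl post-mortem calls above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubectlReady round-trips to the apiserver; `kubectl version` exits
// non-zero when the server is unreachable.
func kubectlReady(context string) bool {
	cmd := exec.Command("kubectl", "--context", context, "version", "--request-timeout=5s")
	return cmd.Run() == nil
}

func main() {
	const context = "functional-285400" // context name from this report
	deadline := time.Now().Add(2 * time.Minute)
	for !kubectlReady(context) {
		if time.Now().After(deadline) {
			fmt.Println("apiserver never became reachable; skipping kubectl post-mortem")
			return
		}
		time.Sleep(5 * time.Second)
	}
	out, err := exec.Command("kubectl", "--context", context,
		"describe", "svc", "hello-node-connect").CombinedOutput()
	fmt.Println(string(out), err)
}
```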
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (11.6006772s)
-- stdout --
	Running
-- /stdout --
** stderr ** 
	W0428 16:57:01.991442   11012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (4m28.3500166s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                  Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| tunnel     | functional-285400 tunnel                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| tunnel     | functional-285400 tunnel                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh cat                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/hostname                                                           |                   |                   |         |                     |                     |
	| addons     | functional-285400 addons list                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	| addons     | functional-285400 addons list                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | -o json                                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/3228.pem                                                 |                   |                   |         |                     |                     |
	| docker-env | functional-285400 docker-env                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /usr/share/ca-certificates/3228.pem                                     |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/51391683.0                                               |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/32282.pem                                                |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /usr/share/ca-certificates/32282.pem                                    |                   |                   |         |                     |                     |
	| image      | functional-285400 image load --daemon                                   | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:52 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/3ec20f2e.0                                               |                   |                   |         |                     |                     |
	| service    | functional-285400 service list                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	| service    | functional-285400 service list                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | -o json                                                                 |                   |                   |         |                     |                     |
	| service    | functional-285400 service                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | --namespace=default --https                                             |                   |                   |         |                     |                     |
	|            | --url hello-node                                                        |                   |                   |         |                     |                     |
	| service    | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | service hello-node --url                                                |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                                                        |                   |                   |         |                     |                     |
	| service    | functional-285400 service                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | hello-node --url                                                        |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT | 28 Apr 24 16:53 PDT |
	| image      | functional-285400 image save                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:53 PDT | 28 Apr 24 16:54 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image      | functional-285400 image rm                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:54 PDT | 28 Apr 24 16:55 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:55 PDT | 28 Apr 24 16:56 PDT |
	| image      | functional-285400 image load                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image      | functional-285400 image save --daemon                                   | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT |                     |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT | 28 Apr 24 16:57 PDT |
	|            | /etc/test/nested/copy/3228/hosts                                        |                   |                   |         |                     |                     |
	|------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:47:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:47:48.546696   10508 out.go:291] Setting OutFile to fd 932 ...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.547686   10508 out.go:304] Setting ErrFile to fd 996...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.580285   10508 out.go:298] Setting JSON to false
	I0428 16:47:48.586291   10508 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5511,"bootTime":1714342556,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:48.586291   10508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:48.591295   10508 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:48.595382   10508 notify.go:220] Checking for updates...
	I0428 16:47:48.597996   10508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:48.600555   10508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:48.603556   10508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:48.605554   10508 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:48.607555   10508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Apr 29 00:00:41 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e'"
	Apr 29 00:00:41 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36'"
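Every "connection reset by peer" above has the same root cause: docker.service exited with status 1/FAILURE and systemd marked it failed, so cri-dockerd keeps polling a dead socket. A minimal first check from the host, a sketch assuming the functional-285400 profile used in this run:

	# show docker.service state and why systemd marked it failed
	minikube ssh -p functional-285400 -- sudo systemctl status docker --no-pager
	# the last daemon log lines usually carry the actual exit reason
	minikube ssh -p functional-285400 -- sudo journalctl -u docker --no-pager -n 50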
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T00:00:43Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
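Both probes in the fallback chain fail for the same reason: crictl times out validating the CRI v1 API on unix:///var/run/cri-dockerd.sock because cri-dockerd is only a shim and cannot answer while dockerd is down. A hedged way to separate the two layers (the crictl flags are standard; the cri-docker unit name is an assumption about the minikube guest image):

	# query the shim directly with a short deadline instead of the default
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock --timeout 5s ps -a
	# check both units at once; the shim can be active while the engine is not
	sudo systemctl is-active docker cri-docker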
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
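kubectl is refused on localhost:8441 because nothing is listening there: with the container runtime down, the kube-apiserver static pod cannot run. Two quick probes, a sketch assuming the port and node IP recorded in the kubelet events below:

	# confirm no listener on the apiserver port inside the guest
	minikube ssh -p functional-285400 -- sudo ss -tlnp 'sport = :8441'
	# the readiness path the kubelet probe uses; "connection refused" here matches the events below
	curl -k https://172.27.228.231:8441/readyz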
	
	
	==> dmesg <==
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	[Apr28 23:51] systemd-fstab-generator[12864]: Ignoring "noauto" option for root device
	[ +20.434761] systemd-fstab-generator[12973]: Ignoring "noauto" option for root device
	[  +0.147556] kauditd_printk_skb: 12 callbacks suppressed
	[Apr28 23:55] systemd-fstab-generator[14266]: Ignoring "noauto" option for root device
	[  +0.129995] kauditd_printk_skb: 12 callbacks suppressed
	[Apr28 23:56] systemd-fstab-generator[14547]: Ignoring "noauto" option for root device
	[  +0.170897] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:01:41 up 34 min,  0 users,  load average: 0.06, 0.04, 0.06
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.322101    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.324543    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.326040    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.327781    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.327823    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.618415    5373 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-285400.17ca95d10e8897e6\": dial tcp 172.27.228.231:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-285400.17ca95d10e8897e6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-285400,UID:f291e154417b21ff4db6980bc8535b89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.228.231:8441/readyz\": dial tcp 172.27.228.231:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:33.292431334 +0000 UTC m=+227.703458592,LastTimestamp:2024-04-28 23:35:34.481599155 +0000 UTC m=+228.892626413,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.618775    5373 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-functional-285400.17ca95d15569dab3  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-285400,UID:f291e154417b21ff4db6980bc8535b89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.228.231:8441/readyz\": dial tcp 172.27.228.231:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:34.481599155 +0000 UTC m=+228.892626413,LastTimestamp:2024-04-28 23:35:34.481599155 +0000 UTC m=+228.892626413,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.620100    5373 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-285400.17ca95d10e8897e6\": dial tcp 172.27.228.231:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-285400.17ca95d10e8897e6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-285400,UID:f291e154417b21ff4db6980bc8535b89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.228.231:8441/readyz\": dial tcp 172.27.228.231:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:33.292431334 +0000 UTC m=+227.703458592,LastTimestamp:2024-04-28 23:35:35.481432782 +0000 UTC m=+229.892459940,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 29 00:01:40 functional-285400 kubelet[5373]: E0429 00:01:40.494247    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.093770    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m18.720866346s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.415566    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.416200    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.416392    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.418223    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.424949    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429310    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429350    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429448    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429522    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: I0429 00:01:41.429540    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429617    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429691    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.432437    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.432506    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.433012    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
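The kubelet sums the situation up itself: PLEG last seen active 26m18s ago against a 3m0s threshold, and RuntimeReady=false with DockerDaemonNotReady. Recovery is runtime-first; a minimal sketch, assuming the docker and cri-docker unit names in the minikube guest:

	# bring the engine and its CRI shim back, then check whether the kubelet's PLEG recovers
	minikube ssh -p functional-285400 -- sudo systemctl restart docker cri-docker
	minikube ssh -p functional-285400 -- sudo journalctl -u kubelet --no-pager -n 20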
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 16:57:13.594930    6108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:57:40.485656    6108 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:58:40.662908    6108 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:58:40.719008    6108 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:58:40.807280    6108 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.45/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-scheduler%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
	E0428 16:59:40.937807    6108 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:59:40.980631    6108 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:59:41.023937    6108 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 17:00:41.179684    6108 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
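One entry in this run differs from the rest: the kube-scheduler listing at 16:58:40 failed with "permission denied" on the socket rather than "Cannot connect", meaning something briefly answered /var/run/docker.sock but rejected the caller, most likely a race with a daemon restart attempt. A hedged way to tell the two apart:

	# a dead daemon yields "connection refused" or a missing socket;
	# a live daemon with restrictive socket permissions yields "permission denied"
	minikube ssh -p functional-285400 -- ls -l /var/run/docker.sock
	minikube ssh -p functional-285400 -- id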

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (12.211756s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:01:41.961666    4772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (300.98s)
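helpers_test gates its kubectl assertions on the profile's apiserver state, which is why the log ends with "skipping kubectl commands" instead of a second failure. The same guard can be reproduced from a shell, a sketch assuming the binary path used by this job and that minikube has written a functional-285400 context into the kubeconfig:

	# "minikube status" prints Stopped and exits non-zero here; only query the cluster when it reports Running
	if out/minikube-windows-amd64.exe status --format='{{.APIServer}}' -p functional-285400 | grep -q Running; then
	  kubectl --context functional-285400 get pods -A
	fi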

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (492.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
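The 4m0s wait is a label-selector poll against kube-system, and every retry below dies at the dial: the apiserver at 172.27.228.231:8441 actively refuses connections. The equivalent one-off query by hand, a sketch assuming the functional-285400 kubectl context exists:

	kubectl --context functional-285400 -n kube-system get pods -l integration-test=storage-provisioner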
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
[... the preceding WARNING line from helpers_test.go:329 repeats verbatim 55 more times as the poll retries ...]
E0428 16:50:36.428838    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
[the warning above repeated 60 more times; identical duplicates elided]
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.27.228.231:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
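Each warning above is one iteration of the test helper's pod-list poll against the apiserver at 172.27.228.231:8441. For reference, an equivalent query can be reproduced with client-go roughly as follows (a minimal sketch, not the test's actual helper code; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig minikube writes (path is an assumption; adjust for your setup).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same namespace and label selector as the failing poll above.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		if err != nil {
			// With the apiserver down, this returns the same "connection refused" error.
			fmt.Println("pod list failed:", err)
			return
		}
		fmt.Printf("found %d pod(s)\n", len(pods.Items))
	}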
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (11.8818676s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0428 16:52:41.126366   10636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
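Because the wait ended with "connection refused" rather than a plain timeout, a quick TCP probe of the apiserver endpoint is a cheap first check before reading the post-mortem logs below (a hedged sketch; the address is the one from this run):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint taken from the warnings above; substitute your profile's address.
		conn, err := net.DialTimeout("tcp", "172.27.228.231:8441", 5*time.Second)
		if err != nil {
			// "actively refused" means nothing is listening on the port, i.e. the
			// apiserver is down, matching the "Stopped" status minikube reports above.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port accepts connections")
	}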
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (11.4412714s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0428 16:52:53.021508    6692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (3m36.3738364s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                           Args                           |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh        | functional-285400 ssh -n                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|            | functional-285400 sudo cat                               |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                          |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:49 PDT |
	| image      | functional-285400 image load --daemon                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:49 PDT | 28 Apr 24 16:50 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                        |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT | 28 Apr 24 16:51 PDT |
	| ssh        | functional-285400 ssh echo                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT | 28 Apr 24 16:51 PDT |
	|            | hello                                                    |                   |                   |         |                     |                     |
	| tunnel     | functional-285400 tunnel                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                        |                   |                   |         |                     |                     |
	| tunnel     | functional-285400 tunnel                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                        |                   |                   |         |                     |                     |
	| tunnel     | functional-285400 tunnel                                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                        |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh cat                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/hostname                                            |                   |                   |         |                     |                     |
	| addons     | functional-285400 addons list                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	| addons     | functional-285400 addons list                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | -o json                                                  |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/3228.pem                                  |                   |                   |         |                     |                     |
	| docker-env | functional-285400 docker-env                             | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT |                     |
	| ssh        | functional-285400 ssh sudo cat                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /usr/share/ca-certificates/3228.pem                      |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/51391683.0                                |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/32282.pem                                 |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /usr/share/ca-certificates/32282.pem                     |                   |                   |         |                     |                     |
	| image      | functional-285400 image load --daemon                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:52 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                        |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/3ec20f2e.0                                |                   |                   |         |                     |                     |
	| service    | functional-285400 service list                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	| service    | functional-285400 service list                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | -o json                                                  |                   |                   |         |                     |                     |
	| service    | functional-285400 service                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | --namespace=default --https                              |                   |                   |         |                     |                     |
	|            | --url hello-node                                         |                   |                   |         |                     |                     |
	| service    | functional-285400                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | service hello-node --url                                 |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                                         |                   |                   |         |                     |                     |
	| service    | functional-285400 service                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | hello-node --url                                         |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:47:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:47:48.546696   10508 out.go:291] Setting OutFile to fd 932 ...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.547686   10508 out.go:304] Setting ErrFile to fd 996...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.580285   10508 out.go:298] Setting JSON to false
	I0428 16:47:48.586291   10508 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5511,"bootTime":1714342556,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:48.586291   10508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:48.591295   10508 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:48.595382   10508 notify.go:220] Checking for updates...
	I0428 16:47:48.597996   10508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:48.600555   10508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:48.603556   10508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:48.605554   10508 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:48.607555   10508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID '433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '433fcffb54c950651a6381bbf5f1c81d14c618d1f243f84732643426a7414bff'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c'"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="error getting RW layer size for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:55:39 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:55:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e'"
	Apr 28 23:55:39 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 28 23:55:39 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 28 23:55:39 functional-285400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 28 23:55:39 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:55:39 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-28T23:55:42Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	[Apr28 23:51] systemd-fstab-generator[12864]: Ignoring "noauto" option for root device
	[ +20.434761] systemd-fstab-generator[12973]: Ignoring "noauto" option for root device
	[  +0.147556] kauditd_printk_skb: 12 callbacks suppressed
	[Apr28 23:55] systemd-fstab-generator[14266]: Ignoring "noauto" option for root device
	[  +0.129995] kauditd_printk_skb: 12 callbacks suppressed
	[Apr28 23:56] systemd-fstab-generator[14547]: Ignoring "noauto" option for root device
	[  +0.170897] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:56:40 up 29 min,  0 users,  load average: 0.00, 0.02, 0.07
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 28 23:56:32 functional-285400 kubelet[5373]: E0428 23:56:32.880615    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?resourceVersion=0&timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:56:32 functional-285400 kubelet[5373]: E0428 23:56:32.881774    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:56:32 functional-285400 kubelet[5373]: E0428 23:56:32.882808    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:56:32 functional-285400 kubelet[5373]: E0428 23:56:32.883944    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:56:32 functional-285400 kubelet[5373]: E0428 23:56:32.884935    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:56:32 functional-285400 kubelet[5373]: E0428 23:56:32.884960    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 28 23:56:35 functional-285400 kubelet[5373]: I0428 23:56:35.842030    5373 status_manager.go:853] "Failed to get status for pod" podUID="f291e154417b21ff4db6980bc8535b89" pod="kube-system/kube-apiserver-functional-285400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:56:36 functional-285400 kubelet[5373]: E0428 23:56:36.034035    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m13.66125319s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 28 23:56:38 functional-285400 kubelet[5373]: E0428 23:56:38.499117    5373 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-285400.17ca95ceca63525c\": dial tcp 172.27.228.231:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-285400.17ca95ceca63525c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-285400,UID:35dedd627fdfea3b9aff90de42393f4a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:23.55920342 +0000 UTC m=+217.970230578,LastTimestamp:2024-04-28 23:35:33.55942145 +0000 UTC m=+227.970448608,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 28 23:56:39 functional-285400 kubelet[5373]: E0428 23:56:39.383263    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.119696    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.122775    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.122811    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.132820    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.132877    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.132963    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.133003    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.133030    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.133058    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: I0428 23:56:40.133071    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.133108    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.133237    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.134942    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.135052    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:56:40 functional-285400 kubelet[5373]: E0428 23:56:40.135971    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0428 16:53:04.447911    7608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:53:39.143602    7608 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:53:39.201858    7608 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:53:39.277637    7608 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:54:39.428662    7608 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:54:39.470754    7608 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:54:39.509072    7608 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:55:39.658073    7608 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:55:39.703997    7608 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (12.3716081s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0428 16:56:40.848693   12772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (492.09s)
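
Every error in the block above bottoms out in the same condition: docker.service has exited, so requests to /var/run/docker.sock are reset or refused. One way to confirm that from inside the guest is to hit the same versioned endpoint the kubelet was querying; the sketch below is a hypothetical diagnostic, not part of the test suite, and assumes the standard socket path shown in the logs.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// Route HTTP over the Docker unix socket; the URL host below is a
	// placeholder because DialContext ignores it and dials the socket.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr, Timeout: 5 * time.Second}

	// Same endpoint the kubelet errors reference: /v1.44/version.
	resp, err := client.Get("http://docker/v1.44/version")
	if err != nil {
		// A daemon that dies mid-request surfaces as "connection reset by
		// peer"; one that never started as "connection refused" or ENOENT.
		fmt.Println("docker daemon unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("docker daemon is up: %s\n", body)
}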

TestFunctional/parallel/MySQL (292.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-285400 replace --force -f testdata\mysql.yaml
functional_test.go:1789: (dbg) Non-zero exit: kubectl --context functional-285400 replace --force -f testdata\mysql.yaml: exit status 1 (4.2258828s)

** stderr ** 
	error when deleting "testdata\\mysql.yaml": Delete "https://172.27.228.231:8441/api/v1/namespaces/default/services/mysql": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error when deleting "testdata\\mysql.yaml": Delete "https://172.27.228.231:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1791: failed to kubectl replace mysql: args "kubectl --context functional-285400 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (11.6821612s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0428 16:57:06.397406    6020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (4m23.8886942s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                  Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| tunnel     | functional-285400 tunnel                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| tunnel     | functional-285400 tunnel                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:50 PDT |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh cat                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/hostname                                                           |                   |                   |         |                     |                     |
	| addons     | functional-285400 addons list                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	| addons     | functional-285400 addons list                                           | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | -o json                                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/3228.pem                                                 |                   |                   |         |                     |                     |
	| docker-env | functional-285400 docker-env                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /usr/share/ca-certificates/3228.pem                                     |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/51391683.0                                               |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/32282.pem                                                |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /usr/share/ca-certificates/32282.pem                                    |                   |                   |         |                     |                     |
	| image      | functional-285400 image load --daemon                                   | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:52 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|            | /etc/ssl/certs/3ec20f2e.0                                               |                   |                   |         |                     |                     |
	| service    | functional-285400 service list                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	| service    | functional-285400 service list                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | -o json                                                                 |                   |                   |         |                     |                     |
	| service    | functional-285400 service                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | --namespace=default --https                                             |                   |                   |         |                     |                     |
	|            | --url hello-node                                                        |                   |                   |         |                     |                     |
	| service    | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | service hello-node --url                                                |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                                                        |                   |                   |         |                     |                     |
	| service    | functional-285400 service                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|            | hello-node --url                                                        |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT | 28 Apr 24 16:53 PDT |
	| image      | functional-285400 image save                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:53 PDT | 28 Apr 24 16:54 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image      | functional-285400 image rm                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:54 PDT | 28 Apr 24 16:55 PDT |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image      | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:55 PDT | 28 Apr 24 16:56 PDT |
	| image      | functional-285400 image load                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image      | functional-285400 image save --daemon                                   | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT |                     |
	|            | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT | 28 Apr 24 16:57 PDT |
	|            | /etc/test/nested/copy/3228/hosts                                        |                   |                   |         |                     |                     |
	|------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:47:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:47:48.546696   10508 out.go:291] Setting OutFile to fd 932 ...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.547686   10508 out.go:304] Setting ErrFile to fd 996...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.580285   10508 out.go:298] Setting JSON to false
	I0428 16:47:48.586291   10508 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5511,"bootTime":1714342556,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:48.586291   10508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:48.591295   10508 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:48.595382   10508 notify.go:220] Checking for updates...
	I0428 16:47:48.597996   10508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:48.600555   10508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:48.603556   10508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:48.605554   10508 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:48.607555   10508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Apr 29 00:00:41 functional-285400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'adf0e04b0c300d268c0b2892e2bbd6cde30f5bda06a98ad55187b745ea95db8c'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e'"
	Apr 29 00:00:41 functional-285400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0a13487c372a63c23aeae3f66286a82014bfbc06276bb907ae303a767759a275'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="error getting RW layer size for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:00:41 functional-285400 cri-dockerd[4496]: time="2024-04-29T00:00:41Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T00:00:43Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	[Apr28 23:51] systemd-fstab-generator[12864]: Ignoring "noauto" option for root device
	[ +20.434761] systemd-fstab-generator[12973]: Ignoring "noauto" option for root device
	[  +0.147556] kauditd_printk_skb: 12 callbacks suppressed
	[Apr28 23:55] systemd-fstab-generator[14266]: Ignoring "noauto" option for root device
	[  +0.129995] kauditd_printk_skb: 12 callbacks suppressed
	[Apr28 23:56] systemd-fstab-generator[14547]: Ignoring "noauto" option for root device
	[  +0.170897] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:01:41 up 34 min,  0 users,  load average: 0.06, 0.04, 0.06
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.322101    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.324543    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.326040    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.327781    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.327823    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.618415    5373 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-285400.17ca95d10e8897e6\": dial tcp 172.27.228.231:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-285400.17ca95d10e8897e6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-285400,UID:f291e154417b21ff4db6980bc8535b89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.228.231:8441/readyz\": dial tcp 172.27.228.231:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:33.292431334 +0000 UTC m=+227.703458592,LastTimestamp:2024-04-28 23:35:34.481599155 +0000 UTC m=+228.892626413,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.618775    5373 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-functional-285400.17ca95d15569dab3  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-285400,UID:f291e154417b21ff4db6980bc8535b89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.228.231:8441/readyz\": dial tcp 172.27.228.231:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:34.481599155 +0000 UTC m=+228.892626413,LastTimestamp:2024-04-28 23:35:34.481599155 +0000 UTC m=+228.892626413,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 29 00:01:38 functional-285400 kubelet[5373]: E0429 00:01:38.620100    5373 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-285400.17ca95d10e8897e6\": dial tcp 172.27.228.231:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-285400.17ca95d10e8897e6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-285400,UID:f291e154417b21ff4db6980bc8535b89,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.228.231:8441/readyz\": dial tcp 172.27.228.231:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-285400,},FirstTimestamp:2024-04-28 23:35:33.292431334 +0000 UTC m=+227.703458592,LastTimestamp:2024-04-28 23:35:35.481432782 +0000 UTC m=+229.892459940,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-285400,}"
	Apr 29 00:01:40 functional-285400 kubelet[5373]: E0429 00:01:40.494247    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.093770    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m18.720866346s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.415566    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.416200    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.416392    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.418223    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.424949    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429310    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429350    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429448    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429522    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: I0429 00:01:41.429540    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429617    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.429691    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.432437    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.432506    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 00:01:41 functional-285400 kubelet[5373]: E0429 00:01:41.433012    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0428 16:57:18.078209    6792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:57:40.485314    6792 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:58:40.662683    6792 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:58:40.703310    6792 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:58:40.765256    6792 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:59:40.938734    6792 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:59:40.977357    6792 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:59:41.028939    6792 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 17:00:41.170444    6792 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (12.1544539s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0428 17:01:42.023865    3400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (292.02s)
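
The MySQL failure is a symptom of the same outage: with the container runtime down, kube-apiserver is gone too, so kubectl's delete calls get "connection refused" on 172.27.228.231:8441. A hypothetical check, not part of the test suite, that separates "host up, port closed" from a plain network timeout:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the test output above; adjust for other clusters.
	conn, err := net.DialTimeout("tcp", "172.27.228.231:8441", 3*time.Second)
	if err != nil {
		// "connection refused" means the VM answered but nothing is
		// listening on 8441 (apiserver down); a timeout would instead
		// suggest a Hyper-V networking problem.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8441 is accepting connections")
}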

TestFunctional/parallel/NodeLabels (154.44s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-285400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-285400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (2.231036s)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-285400 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
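A note on the recurring stderr line above: "connectex: No connection could be made because the target machine actively refused it" is the Windows spelling of ECONNREFUSED. The host 172.27.228.231 answered, but nothing was listening on port 8441, so the apiserver process itself was down rather than the network path being broken (a firewall or routing problem would surface as a timeout instead). A minimal standalone probe that makes the distinction visible (not part of the test suite; the address is copied from the failure above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the failing run; substitute your own endpoint.
		conn, err := net.DialTimeout("tcp", "172.27.228.231:8441", 5*time.Second)
		if err != nil {
			// "connection refused" (connectex on Windows) means the host
			// replied with a TCP RST: it is up, but no process is bound to
			// the port. A timeout here would instead suggest a network issue.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}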
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-285400 -n functional-285400: exit status 2 (14.2666349s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0428 16:48:18.598264    6296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs -n 25: (2m5.9775495s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command  |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh       | functional-285400 ssh                                                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT |                     |
	|           | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache     | functional-285400 cache reload                                                                      | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	| ssh       | functional-285400 ssh                                                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache     | delete                                                                                              | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | registry.k8s.io/pause:3.1                                                                           |                   |                   |         |                     |                     |
	| cache     | delete                                                                                              | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| kubectl   | functional-285400 kubectl --                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:33 PDT | 28 Apr 24 16:33 PDT |
	|           | --context functional-285400                                                                         |                   |                   |         |                     |                     |
	|           | get pods                                                                                            |                   |                   |         |                     |                     |
	| start     | -p functional-285400                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:34 PDT |                     |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                            |                   |                   |         |                     |                     |
	|           | --wait=all                                                                                          |                   |                   |         |                     |                     |
	| config    | functional-285400 config unset                                                                      | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| cp        | functional-285400 cp                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| config    | functional-285400 config get                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-285400 config set                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config    | functional-285400 config get                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-285400 config unset                                                                      | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:47 PDT |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| config    | functional-285400 config get                                                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | cpus                                                                                                |                   |                   |         |                     |                     |
	| start     | -p functional-285400                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start     | -p functional-285400                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-285400 ssh -n                                                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT | 28 Apr 24 16:48 PDT |
	|           | functional-285400 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| dashboard | --url --port 36195                                                                                  | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:47 PDT |                     |
	|           | -p functional-285400                                                                                |                   |                   |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| cp        | functional-285400 cp functional-285400:/home/docker/cp-test.txt                                     | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|           | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3335772000\001\cp-test.txt |                   |                   |         |                     |                     |
	| license   |                                                                                                     | minikube          | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	| ssh       | functional-285400 ssh sudo                                                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT |                     |
	|           | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| ssh       | functional-285400 ssh -n                                                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|           | functional-285400 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| image     | functional-285400 image load --daemon                                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT |                     |
	|           | gcr.io/google-containers/addon-resizer:functional-285400                                            |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| cp        | functional-285400 cp                                                                                | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT | 28 Apr 24 16:48 PDT |
	|           | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh       | functional-285400 ssh -n                                                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:48 PDT |                     |
	|           | functional-285400 sudo cat                                                                          |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	|-----------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:47:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:47:48.546696   10508 out.go:291] Setting OutFile to fd 932 ...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.547686   10508 out.go:304] Setting ErrFile to fd 996...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.580285   10508 out.go:298] Setting JSON to false
	I0428 16:47:48.586291   10508 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5511,"bootTime":1714342556,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:48.586291   10508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:48.591295   10508 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:48.595382   10508 notify.go:220] Checking for updates...
	I0428 16:47:48.597996   10508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:48.600555   10508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:48.603556   10508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:48.605554   10508 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:48.607555   10508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a0de6e012e89dacb3974cf710d9bc96ee3fbc92683816e4666ecffcf662fc92e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cbf5b97235b0d424584669e34492dd43b20a854168c6d7ccc74beecafeca4f36'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4ed6581dd266f5773a4271d07bf90e39e5bd8b12dfbd1aa6245f8f7e38402c28'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ad09f3881d270059cc24244ae9a6cceaa54e1aeb3caf65a4730287ab21242438'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2bb72a14bc213a1f796565b47609bc90aaef92a886eb94f50221fb701913067e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e12ec17c7cd66d2557c8b6c2002cd859f2c569113a2acde2d894e4e3a29a018e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd944ce960b21e08f000643fd3e253b0f3b5cb7e43b2eb8f490a08b847a324e17'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '68a91ddb28289fc7d8c05ceebacd2cd7cadbd30cfc59407e807a3b0c5d346399'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8f29a8fbd5b2483070125f6550275c97d7c2e5165e19b6286cafefd0b35929aa'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d99d3b4a2452acbeaaf03a77ca1fb994f238b78f15d640f2971b4bd2e75858e'"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="error getting RW layer size for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:49:37 functional-285400 cri-dockerd[4496]: time="2024-04-28T23:49:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9d14cad0dcbb41d1f88b3d4b73c79e8d511d7b8ccaa6063838191dcb8dcb1c4a'"
	Apr 28 23:49:38 functional-285400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 28 23:49:38 functional-285400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 28 23:49:38 functional-285400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-28T23:49:40Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +15.453966] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.220302] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.841050] kauditd_printk_skb: 88 callbacks suppressed
	[Apr28 23:30] kauditd_printk_skb: 10 callbacks suppressed
	[Apr28 23:31] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.670661] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +0.288124] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.300389] systemd-fstab-generator[3833]: Ignoring "noauto" option for root device
	[  +5.385888] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.904050] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.210204] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.203000] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.314169] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +0.870536] systemd-fstab-generator[4645]: Ignoring "noauto" option for root device
	[  +3.273751] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.459361] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +7.557283] kauditd_printk_skb: 53 callbacks suppressed
	[Apr28 23:32] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.770893] systemd-fstab-generator[6392]: Ignoring "noauto" option for root device
	[Apr28 23:35] systemd-fstab-generator[7923]: Ignoring "noauto" option for root device
	[  +0.155288] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513653] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.290726] systemd-fstab-generator[7971]: Ignoring "noauto" option for root device
	[  +0.297113] systemd-fstab-generator[7985]: Ignoring "noauto" option for root device
	[  +5.314103] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:50:38 up 23 min,  0 users,  load average: 0.05, 0.08, 0.11
	Linux functional-285400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.015201    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?resourceVersion=0&timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.016436    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.017326    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.018426    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.019265    5373 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-285400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.019375    5373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.246483    5373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-285400?timeout=10s\": dial tcp 172.27.228.231:8441: connect: connection refused" interval="7s"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: I0428 23:50:35.842827    5373 status_manager.go:853] "Failed to get status for pod" podUID="f291e154417b21ff4db6980bc8535b89" pod="kube-system/kube-apiserver-functional-285400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-285400\": dial tcp 172.27.228.231:8441: connect: connection refused"
	Apr 28 23:50:35 functional-285400 kubelet[5373]: E0428 23:50:35.964192    5373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 15m13.591410216s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.373991    5373 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.382692    5373 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.382723    5373 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.383021    5373 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384439    5373 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384472    5373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384600    5373 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384865    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384951    5373 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.384997    5373 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: I0428 23:50:38.385015    5373 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.385594    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.385694    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.386999    5373 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.387295    5373 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 28 23:50:38 functional-285400 kubelet[5373]: E0428 23:50:38.387730    5373 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0428 16:48:32.853173    7808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0428 16:49:37.947117    7808 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:37.991832    7808 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.028038    7808 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.067081    7808 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.107035    7808 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.146036    7808 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.187785    7808 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0428 16:49:38.226914    7808 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.45/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_storage-provisioner%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied

** /stderr **
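An aside on the "%!F(MISSING)" noise in the Docker and kubelet excerpts above: the underlying URLs contain the percent-encoded socket path /var/run/docker.sock (%2F for each slash), and somewhere along the logging path that already-encoded string is passed through a printf-style formatter. Go's fmt then reads %2F as the verb 'F' with width 2 and, having no operand for it, renders %!F(MISSING); the messages still refer to the same unreachable docker.sock. A two-line reproduction of the fmt behavior (a demonstration, not cri-dockerd's code):

	package main

	import "fmt"

	func main() {
		raw := "http://%2Fvar%2Frun%2Fdocker.sock/v1.44/version"

		// Treating the encoded URL as a format string: fmt sees "%2F" as
		// verb 'F' with width 2 and no matching operand, so it prints
		// "%!F(MISSING)" in its place -- exactly the artifact in the logs.
		fmt.Printf(raw + "\n")

		// Treating it as data leaves the URL intact.
		fmt.Printf("%s\n", raw)
	}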
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-285400 -n functional-285400: exit status 2 (11.9331264s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0428 16:50:38.828884   10792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-285400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (154.44s)
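The five assertion failures above share one root cause: with the apiserver refusing connections, the node query returns an empty items list, and the go-template then indexes element 0 of that empty slice, which is exactly the "reflect: slice index out of range" the debug dump prints. A self-contained sketch of the failure and of a guarded variant, using the standard library's text/template (an illustration, not kubectl's implementation):

	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		// Shape of the raw data shown in the dump: an empty node list.
		data := map[string]any{"items": []any{}}

		// The template from the failing test: indexing element 0 of an empty
		// slice makes Execute fail with "reflect: slice index out of range".
		bad := template.Must(template.New("output").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
		if err := bad.Execute(os.Stdout, data); err != nil {
			fmt.Println("template error:", err)
		}

		// Guarding on the list first avoids the error; an empty result then
		// reads as "no nodes" instead of a template failure.
		guarded := template.Must(template.New("guarded").Parse(
			`{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`))
		if err := guarded.Execute(os.Stdout, data); err != nil {
			fmt.Println("unexpected:", err)
		}
	}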

TestFunctional/parallel/ImageCommands/ImageListShort (45.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort


=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls --format short --alsologtostderr: (45.7532247s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-285400 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-285400 image ls --format short --alsologtostderr:
W0428 16:57:55.058240     788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 16:57:55.064385     788 out.go:291] Setting OutFile to fd 692 ...
I0428 16:57:55.065858     788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:57:55.065858     788 out.go:304] Setting ErrFile to fd 1012...
I0428 16:57:55.065858     788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:57:55.082980     788 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:57:55.083659     788 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:57:55.084412     788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:57:57.142092     788 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:57:57.142092     788 main.go:141] libmachine: [stderr =====>] : 
I0428 16:57:57.155782     788 ssh_runner.go:195] Run: systemctl --version
I0428 16:57:57.155782     788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:57:59.215465     788 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:57:59.215465     788 main.go:141] libmachine: [stderr =====>] : 
I0428 16:57:59.216368     788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
I0428 16:58:01.782115     788 main.go:141] libmachine: [stdout =====>] : 172.27.228.231

I0428 16:58:01.782115     788 main.go:141] libmachine: [stderr =====>] : 
I0428 16:58:01.782115     788 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
I0428 16:58:01.885840     788 ssh_runner.go:235] Completed: systemctl --version: (4.7300506s)
I0428 16:58:01.895297     788 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0428 16:58:40.667540     788 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (38.7721076s)
W0428 16:58:40.667692     788 cache_images.go:715] Failed to list images for profile functional-285400 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (45.75s)
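For context on how the listing ends up empty: the command shells into the VM and runs docker images --no-trunc --format "{{json .}}", which normally prints one JSON object per line; with dockerd mid-restart it exits with status 1, the parsed list stays empty, and the check for registry.k8s.io/pause fails. A hedged sketch of consuming that per-line output (the field names are Docker's documented format placeholders; the struct and loop are illustrative, not minikube's cache_images.go):

	package main

	import (
		"bufio"
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerImage declares only the placeholders used below; the real
	// output carries more fields (Digest, Size, CreatedAt, ...).
	type dockerImage struct {
		Repository string `json:"Repository"`
		Tag        string `json:"Tag"`
		ID         string `json:"ID"`
	}

	func main() {
		out, err := exec.Command("docker", "images", "--no-trunc",
			"--format", "{{json .}}").Output()
		if err != nil {
			// The failure mode in the run above: daemon down, exit status 1,
			// so there is no list to search for registry.k8s.io/pause.
			fmt.Println("docker images failed:", err)
			return
		}
		found := false
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() { // one JSON document per image, one per line
			var img dockerImage
			if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
				continue // skip malformed lines
			}
			if img.Repository == "registry.k8s.io/pause" {
				found = true
			}
		}
		fmt.Println("pause image listed:", found)
	}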

TestFunctional/parallel/ImageCommands/ImageListTable (47.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable


=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls --format table --alsologtostderr: (47.7313651s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-285400 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-285400 image ls --format table --alsologtostderr:
W0428 17:01:54.168123   12096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 17:01:54.175502   12096 out.go:291] Setting OutFile to fd 692 ...
I0428 17:01:54.190919   12096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 17:01:54.190919   12096 out.go:304] Setting ErrFile to fd 1244...
I0428 17:01:54.190996   12096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 17:01:54.206865   12096 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 17:01:54.208021   12096 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 17:01:54.208592   12096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 17:01:56.233689   12096 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:01:56.233689   12096 main.go:141] libmachine: [stderr =====>] : 
I0428 17:01:56.246763   12096 ssh_runner.go:195] Run: systemctl --version
I0428 17:01:56.246763   12096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 17:01:58.274253   12096 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:01:58.274319   12096 main.go:141] libmachine: [stderr =====>] : 
I0428 17:01:58.274405   12096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
I0428 17:02:00.721116   12096 main.go:141] libmachine: [stdout =====>] : 172.27.228.231

I0428 17:02:00.721908   12096 main.go:141] libmachine: [stderr =====>] : 
I0428 17:02:00.722081   12096 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
I0428 17:02:00.827664   12096 ssh_runner.go:235] Completed: systemctl --version: (4.5808938s)
I0428 17:02:00.837298   12096 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0428 17:02:41.724269   12096 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (40.8869074s)
W0428 17:02:41.724269   12096 cache_images.go:715] Failed to list images for profile functional-285400 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (47.73s)

TestFunctional/parallel/ImageCommands/ImageListJson (60.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson


=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls --format json --alsologtostderr: (1m0.2915509s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-285400 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-285400 image ls --format json --alsologtostderr:
W0428 17:01:41.597022   11792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 17:01:41.605477   11792 out.go:291] Setting OutFile to fd 1044 ...
I0428 17:01:41.606261   11792 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 17:01:41.606261   11792 out.go:304] Setting ErrFile to fd 1032...
I0428 17:01:41.606261   11792 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 17:01:41.627187   11792 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 17:01:41.627187   11792 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 17:01:41.628926   11792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 17:01:43.836513   11792 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:01:43.836513   11792 main.go:141] libmachine: [stderr =====>] : 
I0428 17:01:43.849842   11792 ssh_runner.go:195] Run: systemctl --version
I0428 17:01:43.849842   11792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 17:01:46.088691   11792 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:01:46.088784   11792 main.go:141] libmachine: [stderr =====>] : 
I0428 17:01:46.088884   11792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
I0428 17:01:48.835820   11792 main.go:141] libmachine: [stdout =====>] : 172.27.228.231

I0428 17:01:48.835949   11792 main.go:141] libmachine: [stderr =====>] : 
I0428 17:01:48.836276   11792 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
I0428 17:01:48.944699   11792 ssh_runner.go:235] Completed: systemctl --version: (5.0948497s)
I0428 17:01:48.956088   11792 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0428 17:02:41.722277   11792 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.7661063s)
W0428 17:02:41.722277   11792 cache_images.go:715] Failed to list images for profile functional-285400 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (60.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (60.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls --format yaml --alsologtostderr: (1m0.2737234s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-285400 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-285400 image ls --format yaml --alsologtostderr:
W0428 16:58:40.838258    5292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 16:58:40.846255    5292 out.go:291] Setting OutFile to fd 1012 ...
I0428 16:58:40.847256    5292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:58:40.847256    5292 out.go:304] Setting ErrFile to fd 1096...
I0428 16:58:40.847256    5292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:58:40.866261    5292 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:58:40.866261    5292 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:58:40.867251    5292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:58:43.016491    5292 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:58:43.016491    5292 main.go:141] libmachine: [stderr =====>] : 
I0428 16:58:43.030685    5292 ssh_runner.go:195] Run: systemctl --version
I0428 16:58:43.031330    5292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:58:45.170476    5292 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:58:45.170476    5292 main.go:141] libmachine: [stderr =====>] : 
I0428 16:58:45.171186    5292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
I0428 16:58:47.709525    5292 main.go:141] libmachine: [stdout =====>] : 172.27.228.231

I0428 16:58:47.709525    5292 main.go:141] libmachine: [stderr =====>] : 
I0428 16:58:47.710355    5292 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
I0428 16:58:47.813388    5292 ssh_runner.go:235] Completed: systemctl --version: (4.7825239s)
I0428 16:58:47.833737    5292 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0428 16:59:40.942926    5292 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (53.1091093s)
W0428 16:59:40.942926    5292 cache_images.go:715] Failed to list images for profile functional-285400 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (60.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (120.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 ssh pgrep buildkitd: exit status 1 (8.7549079s)

** stderr ** 
	W0428 16:59:41.103155   11712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image build -t localhost/my-image:functional-285400 testdata\build --alsologtostderr
E0428 17:00:36.428864    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image build -t localhost/my-image:functional-285400 testdata\build --alsologtostderr: (51.4857538s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-285400 image build -t localhost/my-image:functional-285400 testdata\build --alsologtostderr:
W0428 16:59:49.848644    3944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 16:59:49.856141    3944 out.go:291] Setting OutFile to fd 1460 ...
I0428 16:59:49.874458    3944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:59:49.874458    3944 out.go:304] Setting ErrFile to fd 1172...
I0428 16:59:49.874458    3944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:59:49.890127    3944 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:59:49.910367    3944 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:59:49.911406    3944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:59:51.940013    3944 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:59:51.940013    3944 main.go:141] libmachine: [stderr =====>] : 
I0428 16:59:51.952734    3944 ssh_runner.go:195] Run: systemctl --version
I0428 16:59:51.952734    3944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:59:54.036305    3944 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:59:54.036305    3944 main.go:141] libmachine: [stderr =====>] : 
I0428 16:59:54.036305    3944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
I0428 16:59:56.465106    3944 main.go:141] libmachine: [stdout =====>] : 172.27.228.231

I0428 16:59:56.465106    3944 main.go:141] libmachine: [stderr =====>] : 
I0428 16:59:56.465106    3944 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
I0428 16:59:56.569187    3944 ssh_runner.go:235] Completed: systemctl --version: (4.616446s)
I0428 16:59:56.569261    3944 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.581695731.tar
I0428 16:59:56.582198    3944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0428 16:59:56.616301    3944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.581695731.tar
I0428 16:59:56.624291    3944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.581695731.tar: stat -c "%s %y" /var/lib/minikube/build/build.581695731.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.581695731.tar': No such file or directory
I0428 16:59:56.624508    3944 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.581695731.tar --> /var/lib/minikube/build/build.581695731.tar (3072 bytes)
I0428 16:59:56.689494    3944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.581695731
I0428 16:59:56.724626    3944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.581695731 -xf /var/lib/minikube/build/build.581695731.tar
I0428 16:59:56.743076    3944 docker.go:360] Building image: /var/lib/minikube/build/build.581695731
I0428 16:59:56.756712    3944 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-285400 /var/lib/minikube/build/build.581695731
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0428 17:00:41.184876    3944 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-285400 /var/lib/minikube/build/build.581695731: (44.4280628s)
W0428 17:00:41.185369    3944 build_images.go:125] Failed to build image for profile functional-285400. make sure the profile is running. Docker build /var/lib/minikube/build/build.581695731.tar: buildimage docker: docker build -t localhost/my-image:functional-285400 /var/lib/minikube/build/build.581695731: Process exited with status 1
stdout:

stderr:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0428 17:00:41.185698    3944 build_images.go:133] succeeded building to: 
I0428 17:00:41.185827    3944 build_images.go:134] failed building to: functional-285400
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls: (1m0.2518262s)
functional_test.go:442: expected "localhost/my-image:functional-285400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.49s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (78.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image load --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image load --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr: (18.0515877s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls: (1m0.208883s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-285400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (78.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image load --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image load --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr: (1m0.4602796s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls: (1m0.3128353s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-285400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.77s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: W0428 16:50:50.791764    5584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 16:50:50.808915    5584 out.go:291] Setting OutFile to fd 1172 ...
I0428 16:50:50.822491    5584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:50:50.822558    5584 out.go:304] Setting ErrFile to fd 1224...
I0428 16:50:50.822580    5584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 16:50:50.853717    5584 mustload.go:65] Loading cluster: functional-285400
I0428 16:50:50.855083    5584 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 16:50:50.856368    5584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:50:53.149977    5584 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:50:53.150043    5584 main.go:141] libmachine: [stderr =====>] : 
I0428 16:50:53.150043    5584 host.go:66] Checking if "functional-285400" exists ...
I0428 16:50:53.151020    5584 api_server.go:166] Checking apiserver status ...
I0428 16:50:53.168660    5584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0428 16:50:53.168861    5584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-285400 ).state
I0428 16:50:55.558951    5584 main.go:141] libmachine: [stdout =====>] : Running

I0428 16:50:55.559021    5584 main.go:141] libmachine: [stderr =====>] : 
I0428 16:50:55.559021    5584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-285400 ).networkadapters[0]).ipaddresses[0]
I0428 16:50:58.135380    5584 main.go:141] libmachine: [stdout =====>] : 172.27.228.231

I0428 16:50:58.135486    5584 main.go:141] libmachine: [stderr =====>] : 
I0428 16:50:58.135679    5584 sshutil.go:53] new ssh client: &{IP:172.27.228.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-285400\id_rsa Username:docker}
I0428 16:50:58.249726    5584 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.0809796s)
W0428 16:50:58.249726    5584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0428 16:50:58.253902    5584 out.go:177] * The control-plane node functional-285400 apiserver is not running: (state=Stopped)
I0428 16:50:58.256941    5584 out.go:177]   To start a cluster, run: "minikube start -p functional-285400"

stdout: * The control-plane node functional-285400 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-285400"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14896: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.74s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-285400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-285400 apply -f testdata\testsvc.yaml: exit status 1 (4.2165962s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://172.27.228.231:8441/openapi/v2?timeout=32s": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-285400 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.23s)

TestFunctional/parallel/DockerEnv/powershell (451.3s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-285400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-285400"
functional_test.go:495: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-285400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-285400": exit status 1 (7m31.2877174s)

** stderr ** 
	W0428 16:51:11.330047   10152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_docker-env_537a21c2b5fd267f2de7cb94375503777973e7dd_1.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0428 16:58:40.804260   10152 out.go:190] Fprintf failed: write /dev/stdout: The pipe is being closed.

** /stderr **
functional_test.go:498: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-285400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-285400"
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (451.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.48438s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-285400
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image load --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image load --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr: (56.5475452s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls: (1m0.128834s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-285400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.43s)

TestFunctional/parallel/ServiceCmd/DeployApp (2.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-285400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1435: (dbg) Non-zero exit: kubectl --context functional-285400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1693167s)

** stderr ** 
	error: failed to create deployment: Post "https://172.27.228.231:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.27.228.231:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-285400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (2.18s)

TestFunctional/parallel/ServiceCmd/List (6.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 service list: exit status 103 (6.714224s)

-- stdout --
	* The control-plane node functional-285400 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-285400"

-- /stdout --
** stderr ** 
	W0428 16:52:00.511293   10036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1457: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-285400 service list" : exit status 103
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-285400 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-285400\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (6.72s)

TestFunctional/parallel/ServiceCmd/JSONOutput (6.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 service list -o json: exit status 103 (6.7182605s)

-- stdout --
	* The control-plane node functional-285400 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-285400"

-- /stdout --
** stderr ** 
	W0428 16:52:07.210295    6904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1487: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-285400 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (6.72s)

TestFunctional/parallel/ServiceCmd/HTTPS (6.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 service --namespace=default --https --url hello-node: exit status 103 (6.7452329s)

-- stdout --
	* The control-plane node functional-285400 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-285400"

-- /stdout --
** stderr ** 
	W0428 16:52:13.940624   10864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-285400 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (6.75s)

TestFunctional/parallel/ServiceCmd/Format (6.77s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 service hello-node --url --format={{.IP}}: exit status 103 (6.7737098s)

-- stdout --
	* The control-plane node functional-285400 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-285400"

-- /stdout --
** stderr ** 
	W0428 16:52:20.675101    4976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-285400 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1544: "* The control-plane node functional-285400 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-285400\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (6.77s)

TestFunctional/parallel/ServiceCmd/URL (6.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 service hello-node --url: exit status 103 (6.7022969s)

-- stdout --
	* The control-plane node functional-285400 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-285400"

-- /stdout --
** stderr ** 
	W0428 16:52:27.450913    2260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-285400 service hello-node --url": exit status 103
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-285400 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-285400"
functional_test.go:1565: failed to parse "* The control-plane node functional-285400 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-285400\"": parse "* The control-plane node functional-285400 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-285400\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (6.70s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image save gcr.io/google-containers/addon-resizer:functional-285400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
E0428 16:53:39.578765    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image save gcr.io/google-containers/addon-resizer:functional-285400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (1m0.2930063s)
functional_test.go:385: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.29s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: exit status 80 (357.6517ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0428 16:56:40.290927    4344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 16:56:40.298589    4344 out.go:291] Setting OutFile to fd 1396 ...
	I0428 16:56:40.320246    4344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:56:40.320246    4344 out.go:304] Setting ErrFile to fd 1400...
	I0428 16:56:40.320246    4344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:56:40.339750    4344 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:56:40.340492    4344 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	I0428 16:56:40.464715    4344 cache.go:107] acquiring lock: {Name:mkab46ddc60376e757ce246e29c34f8da22864bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:56:40.469059    4344 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" took 128.5662ms
	I0428 16:56:40.474423    4344 out.go:177] 
	W0428 16:56:40.476772    4344 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	W0428 16:56:40.476827    4344 out.go:239] * 
	* 
	W0428 16:56:40.501647    4344 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_image_ba8447ad4ea3263c86e5e46d2d2f1fdaf7731ef8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_image_ba8447ad4ea3263c86e5e46d2d2f1fdaf7731ef8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 16:56:40.504248    4344 out.go:177] 

** /stderr **
functional_test.go:410: loading image into minikube from file: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0428 16:56:40.290927    4344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 16:56:40.298589    4344 out.go:291] Setting OutFile to fd 1396 ...
	I0428 16:56:40.320246    4344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:56:40.320246    4344 out.go:304] Setting ErrFile to fd 1400...
	I0428 16:56:40.320246    4344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:56:40.339750    4344 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:56:40.340492    4344 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	I0428 16:56:40.464715    4344 cache.go:107] acquiring lock: {Name:mkab46ddc60376e757ce246e29c34f8da22864bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:56:40.469059    4344 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" took 128.5662ms
	I0428 16:56:40.474423    4344 out.go:177] 
	W0428 16:56:40.476772    4344 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	W0428 16:56:40.476827    4344 out.go:239] * 
	* 
	W0428 16:56:40.501647    4344 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_image_ba8447ad4ea3263c86e5e46d2d2f1fdaf7731ef8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_image_ba8447ad4ea3263c86e5e46d2d2f1fdaf7731ef8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 16:56:40.504248    4344 out.go:177] 

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.36s)

TestMultiControlPlane/serial/StartCluster (441.77s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-267500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0428 17:05:36.418727    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:08:41.002894    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.017788    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.032738    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.066100    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.113850    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.209227    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.382695    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:41.714772    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:42.365829    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:43.657983    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:46.232301    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:08:51.354570    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:09:01.606057    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:09:22.088971    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:10:03.056144    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:10:19.582369    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:10:36.426547    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:11:24.978564    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ha-267500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: exit status 90 (6m48.380329s)

-- stdout --
	* [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.27.226.61
	  - NO_PROXY=172.27.226.61

-- /stdout --
** stderr ** 
	W0428 17:05:00.634876   15128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
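
The three probes just above are the hyperv driver's preflight: the Hyper-V PowerShell module must be present, and the caller must belong either to BUILTIN\Hyper-V Administrators (well-known SID S-1-5-32-578, False on this host) or to the built-in Administrators role (True here). A minimal sketch for reproducing the same checks interactively, assuming PowerShell 5+ on a Hyper-V-capable host:

    # Is the Hyper-V PowerShell module installed?
    @(Get-Module -ListAvailable hyper-v).Name | Get-Unique

    # Does the current token carry either role the driver accepts?
    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))  # Hyper-V Administrators
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')              # built-in Administrators
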
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
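
The switch query keeps any External switch plus the NAT-backed "Default Switch", which has to be matched by its fixed GUID (c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) because its SwitchType reports 1 (Internal) rather than External, as the JSON above shows. The same probe, runnable on its own as an illustrative sketch:

    # Enumerate candidate switches exactly as the driver does above.
    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    ConvertTo-Json @(
        Hyper-V\Get-VMSwitch |
            Select-Object Id, Name, SwitchType |
            Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
            Sort-Object -Property SwitchType
    )
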
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
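
The disk workflow above is deliberately indirect: a 10 MB fixed VHD is created, the driver writes a raw tar stream (the "magic tar header" plus the generated SSH key) straight into it, and the file is then converted to a dynamic VHD and grown to the requested 20000 MB. Condensed into one sketch, with the paths from this run:

    $dir = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (minikube writes the tar header and SSH key into fixed.vhd at this point)
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
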
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
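
The VM is then assembled from those pieces; condensing the cmdlet sequence above (same names and arguments as this run) into one sketch:

    $vm  = 'ha-267500'
    $dir = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500'
    Hyper-V\New-VM $vm -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $vm -DynamicMemoryEnabled $false   # pin memory at 2200 MB
    Hyper-V\Set-VMProcessor $vm -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $vm -Path "$dir\boot2docker.iso" # boot ISO
    Hyper-V\Add-VMHardDiskDrive -VMName $vm -Path "$dir\disk.vhd"   # disk prepared above
    Hyper-V\Start-VM $vm
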
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
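
"Waiting for host to start..." is a poll loop: the driver re-reads the VM state and the first IP address on the first network adapter roughly once a second until DHCP on the Default Switch hands out a lease (about 25 s in this run, 17:05:44 to 17:06:10). An illustrative standalone equivalent:

    # Poll until the first NIC reports an address.
    do {
        Start-Sleep -Seconds 1
        $ip = ((Hyper-V\Get-VM ha-267500).NetworkAdapters[0]).IPAddresses | Select-Object -First 1
    } until ($ip)
    $ip   # 172.27.226.61 in this run
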
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
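
The docker.service unit installed above exposes dockerd on tcp://0.0.0.0:2376 with TLS client verification against the certificates copied into /etc/docker. Once the daemon is up it could in principle be reached from the Windows host using the client cert and key paths shown in the auth options earlier; a hypothetical check (assumes a docker CLI on the host PATH):

    $certs = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs'
    docker --tlsverify --tlscacert "$certs\ca.pem" --tlscert "$certs\cert.pem" --tlskey "$certs\key.pem" `
        -H tcp://172.27.226.61:2376 version
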
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
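
The clock fix-up above reads the guest clock over SSH (date +%s.%N), compares it against the host-side timestamp (logged as Remote:), and normalizes the guest with sudo date -s @<epoch>; here the guest was about 4.59 s ahead. An illustrative reconstruction of the logged delta from the two timestamps in this run:

    $guestEpoch = 1714349229.688972111   # read from the guest via date +%s.%N above
    $hostEpoch  = 1714349225.096398      # host comparison time, 2024-04-28 17:07:05.096398 PDT
    $guestEpoch - $hostEpoch             # ~4.592574111, matching delta=4.592574111s in the log
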
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
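The sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false) and the runc v2 shim before the restart. A quick way to confirm the driver setting took effect after the restart:

	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# expected: SystemdCgroup = false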
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
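The tee above leaves a one-line crictl config pointing at the cri-dockerd socket, which is what crictl consults from here on. Verifying it is a single cat (expected contents shown as a comment, assuming the write succeeded):

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/cri-dockerd.sock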
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
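The 130-byte /etc/docker/daemon.json copied over above is not echoed into the log; a representative file selecting the cgroupfs driver (its exact contents here are an assumption) would look like this, followed by the same reload/restart sequence:

	# representative /etc/docker/daemon.json -- contents assumed, not from the log
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker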
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
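The bash one-liner above rewrites /etc/hosts idempotently: it filters out any stale host.minikube.internal entry, appends a fresh one for the gateway found above, and copies the temp file back into place. A quick check afterwards:

	grep 'host.minikube.internal' /etc/hosts
	# expected: 172.27.224.1	host.minikube.internal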
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
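	A config like the one rendered above can be exercised without touching the node before the real init; a sketch using the paths generated above (kubeadm's --dry-run prints the manifests it would write and exits):
	
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run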
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
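kube-vip runs as a static pod on each control plane and holds the 172.27.239.254 VIP via ARP leader election (cp_enable/lb_enable above). Once an API server is up behind it, the VIP should answer on 8443; a hypothetical probe from the host:

	# -k because the host does not trust the cluster CA (assumption);
	# /healthz is readable anonymously by default in this Kubernetes version
	curl -sk https://172.27.239.254:8443/healthz
	# expected: ok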
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
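Each CA above is linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's default verifier locates trust anchors. The general pattern behind those ln -fs calls:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints e.g. b5213941
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"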
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
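The cni.yaml applied above is the kindnet manifest minikube renders for multinode clusters. Once it lands, the DaemonSet should appear in kube-system; a hypothetical check (the DaemonSet name kindnet is assumed from the upstream manifest):

	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get ds -n kube-system kindnet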
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
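The burst of identical `kubectl get sa default` calls above is a poll: minikube retries roughly every 500ms until the default service account exists, the signal that kube-system privileges can be elevated. The loop is equivalent to:

	KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done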
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
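The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses just above is the default-storageclass addon marking "standard" as the cluster default. One way to confirm the result from the host (an illustrative check only, not part of the test; it assumes kubectl is on PATH and pointed at this profile's kubeconfig):

    # Hypothetical verification: prints "true" once the addon has annotated the class.
    kubectl get storageclass standard -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'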
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
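The two IsInRole probes above are the driver's privilege check: membership in the Hyper-V Administrators group (well-known SID S-1-5-32-578) or, failing that, full Administrator. A minimal standalone sketch of the same check (an illustration, not the minikube source):

    # Returns $true when the current user may manage Hyper-V VMs.
    $principal    = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $hypervAdmins = [System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578')
    $principal.IsInRole($hypervAdmins) -or $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

In this run the group check returned False and the Administrator check returned True, so provisioning proceeded.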
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
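The ConvertTo-Json query above is how the virtual switch is chosen: any External switch qualifies, as does the built-in "Default Switch", matched by its well-known GUID. Restated as a readable pipeline (a sketch with the same semantics as the one-liner in the log):

    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType |
        ConvertTo-Json

Here only the Default Switch matched (SwitchType 1, i.e. Internal), so the new VM sits behind the host's NAT rather than on an external network.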
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
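Gathered from the interleaved commands above, the whole m02 creation sequence reads as follows (a consolidated sketch; paths and sizes are copied from this log, and the tiny fixed VHD exists only so the SSH key can be written into the raw disk as a "magic" tar header before the disk is converted and grown):

    $dir = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...SSH key tar headers are written into fixed.vhd here, then the disk is made dynamic and resized...
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
    Hyper-V\New-VM ha-267500-m02 -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path "$dir\disk.vhd"
    Hyper-V\Start-VM ha-267500-m02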
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
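The repeating state/IP queries between 17:09:03 and 17:09:31 are the boot wait: the driver polls the VM state and the first adapter's first address until DHCP on the Default Switch hands out a lease. An equivalent loop (a sketch under the same assumptions, with an explicit sleep standing in for the driver's retry delay):

    $name = 'ha-267500-m02'
    do {
        Start-Sleep -Seconds 1
        $state = ( Hyper-V\Get-VM $name ).State
        $ip    = (( Hyper-V\Get-VM $name ).NetworkAdapters[0]).IPAddresses[0]
    } while ($state -eq 'Running' -and -not $ip)
    $ip    # 172.27.238.86 once the lease arrived in this run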
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
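	The find invocation above is logged with its shell quoting stripped, which makes it hard to read. An equivalent, properly quoted form of the same command (reconstructed here for readability; it renames any bridge- or podman-style CNI config to *.mk_disabled so the cluster's chosen CNI owns /etc/cni/net.d) is:

		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;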
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
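	The sed pipeline above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver (SystemdCgroup = false), pins the pause image, and points CNI at /etc/cni/net.d. After those edits, the relevant fragment of the config would look roughly like the following sketch (the exact section paths vary slightly across containerd versions):

		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true
		  sandbox_image = "registry.k8s.io/pause:3.9"
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		    SystemdCgroup = false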
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
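	The 130-byte daemon.json pushed above is what makes Docker use the cgroupfs driver mentioned in the preceding log line. A representative file consistent with these log lines (a sketch; the actual contents are not shown in the log) would be:

		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"],
		  "log-driver": "json-file",
		  "log-opts": {"max-size": "100m"},
		  "storage-driver": "overlay2"
		}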
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-windows-amd64.exe start -p ha-267500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv" : exit status 90
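The decisive line in the journal above is dockerd[1016] failing to dial /run/containerd/containerd.sock within its 60-second deadline ("context deadline exceeded"). Notably, the flow had just stopped the standalone containerd service (sudo systemctl stop -f containerd) before restarting docker, so docker.service appears to be waiting on a socket nothing was serving. When reproducing this by hand, reasonable next steps on the guest would be (hypothetical troubleshooting commands, not part of the recorded run):

	# Is the system containerd unit running, and what did it last log?
	sudo systemctl status containerd
	sudo journalctl --no-pager -u containerd | tail -n 50

	# Does the socket dockerd is trying to dial exist?
	ls -l /run/containerd/containerd.sock

	# Does docker.service pass that socket explicitly via --containerd?
	systemctl cat docker.service | grep -n containerd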
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.8375019s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (8.2509168s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh            | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:51 PDT | 28 Apr 24 16:51 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |                   |         |                     |                     |
	| service        | functional-285400 service list                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	| service        | functional-285400 service list                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|                | -o json                                                                 |                   |                   |         |                     |                     |
	| service        | functional-285400 service                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|                | --namespace=default --https                                             |                   |                   |         |                     |                     |
	|                | --url hello-node                                                        |                   |                   |         |                     |                     |
	| service        | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|                | service hello-node --url                                                |                   |                   |         |                     |                     |
	|                | --format={{.IP}}                                                        |                   |                   |         |                     |                     |
	| service        | functional-285400 service                                               | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT |                     |
	|                | hello-node --url                                                        |                   |                   |         |                     |                     |
	| image          | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:52 PDT | 28 Apr 24 16:53 PDT |
	| image          | functional-285400 image save                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:53 PDT | 28 Apr 24 16:54 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|                | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-285400 image rm                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:54 PDT | 28 Apr 24 16:55 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:55 PDT | 28 Apr 24 16:56 PDT |
	| image          | functional-285400 image load                                            | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT |                     |
	|                | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-285400 image save --daemon                                   | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT | 28 Apr 24 16:57 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-285400                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh            | functional-285400 ssh sudo cat                                          | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:56 PDT | 28 Apr 24 16:57 PDT |
	|                | /etc/test/nested/copy/3228/hosts                                        |                   |                   |         |                     |                     |
	| update-context | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:57 PDT | 28 Apr 24 16:57 PDT |
	|                | update-context                                                          |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |                   |         |                     |                     |
	| update-context | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:57 PDT | 28 Apr 24 16:57 PDT |
	|                | update-context                                                          |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |                   |         |                     |                     |
	| update-context | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:57 PDT | 28 Apr 24 16:57 PDT |
	|                | update-context                                                          |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |                   |         |                     |                     |
	| image          | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:57 PDT | 28 Apr 24 16:58 PDT |
	|                | image ls --format short                                                 |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:58 PDT | 28 Apr 24 16:59 PDT |
	|                | image ls --format yaml                                                  |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh            | functional-285400 ssh pgrep                                             | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:59 PDT |                     |
	|                | buildkitd                                                               |                   |                   |         |                     |                     |
	| image          | functional-285400 image build -t                                        | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:59 PDT | 28 Apr 24 17:00 PDT |
	|                | localhost/my-image:functional-285400                                    |                   |                   |         |                     |                     |
	|                | testdata\build --alsologtostderr                                        |                   |                   |         |                     |                     |
	| image          | functional-285400 image ls                                              | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:00 PDT | 28 Apr 24 17:01 PDT |
	| image          | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:01 PDT |                     |
	|                | image ls --format json                                                  |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-285400                                                       | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:01 PDT | 28 Apr 24 17:02 PDT |
	|                | image ls --format table                                                 |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| delete         | -p functional-285400                                                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:03 PDT | 28 Apr 24 17:05 PDT |
	| start          | -p ha-267500 --wait=true                                                | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:05 PDT |                     |
	|                | --memory=2200 --ha                                                      |                   |                   |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|                | --driver=hyperv                                                         |                   |                   |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
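	For reference, the two probes above test membership in the local "Hyper-V Administrators" group (well-known SID S-1-5-32-578) and in the built-in Administrator role; a minimal PowerShell sketch of the same checks, assuming Windows PowerShell 5.1 as invoked by minikube:
	
	# Sketch of the elevation probes minikube runs before touching Hyper-V.
	$principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
	# S-1-5-32-578 is the well-known SID of the "Hyper-V Administrators" group.
	$hyperVAdmins = [System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578')
	$principal.IsInRole($hyperVAdmins)                                            # False in this run
	$principal.IsInRole([Security.Principal.WindowsBuiltInRole]'Administrator')   # True in this run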
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
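	The switch query above prefers any External switch and otherwise falls back to Hyper-V's built-in "Default Switch", matched by its GUID (c08cb7b8-9b3c-408e-8e30-5e16a3aeb444 is, to my knowledge, the fixed ID of that switch on Windows 10/11 hosts). A standalone sketch of the same selection:
	
	# Sketch: enumerate usable switches, External first, else the Default Switch.
	Hyper-V\Get-VMSwitch |
	    Select-Object Id, Name, SwitchType |
	    Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
	    Sort-Object -Property SwitchType |
	    ConvertTo-Json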
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
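	The three VHD commands above are libmachine's boot-disk trick: create a tiny fixed-size VHD, write a tar header plus the generated SSH key into it as raw bytes (the "Writing magic tar header" lines), then convert it to a dynamic disk and grow it to the requested 20000MB. A condensed sketch, with a hypothetical machine directory standing in for the Jenkins path:
	
	# Sketch of the fixed -> dynamic -> resize VHD sequence (paths are placeholders).
	$dir = 'C:\minikube\machines\ha-267500'    # hypothetical machine directory
	Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
	# ... minikube writes the tar header and SSH key into fixed.vhd at this point ...
	Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB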
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
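	Taken together, the New-VM/Set-*/Start-VM calls above assemble the machine in one pass; an equivalent standalone sketch using the values from this run (directory path hypothetical):
	
	# Sketch of the VM assembly: 2 vCPUs, fixed 2200MB RAM, ISO boot, VHD attached.
	$vm  = 'ha-267500'
	$dir = 'C:\minikube\machines\ha-267500'    # hypothetical machine directory
	Hyper-V\New-VM $vm -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	Hyper-V\Set-VMMemory -VMName $vm -DynamicMemoryEnabled $false
	Hyper-V\Set-VMProcessor $vm -Count 2
	Hyper-V\Set-VMDvdDrive -VMName $vm -Path "$dir\boot2docker.iso"
	Hyper-V\Add-VMHardDiskDrive -VMName $vm -Path "$dir\disk.vhd"
	Hyper-V\Start-VM $vm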
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
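	The "Waiting for host to start" phase above is a simple poll: query the VM state and the first address of the first NIC roughly once a second until the integration services report an IP, which took about 25 seconds here. A minimal sketch of that loop:
	
	# Sketch: poll until the guest reports an address on its first network adapter.
	$vm = 'ha-267500'
	do {
	    Start-Sleep -Seconds 1
	    $state = (Hyper-V\Get-VM $vm).State
	    $ip    = ((Hyper-V\Get-VM $vm).NetworkAdapters[0]).IPAddresses | Select-Object -First 1
	} until ($state -eq 'Running' -and $ip)
	"$vm is $state at $ip"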
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
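The one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the current mapping, and copy the result back in a single sudo step. The same pattern, generalized (hypothetical variables, same commands):

    # replace-or-add a pinned hosts entry without duplicating it
    ip=172.27.224.1; name=host.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$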
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
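Preload handling in one picture: since the ~360 MB image tarball is not yet on the VM, it is scp'd over and unpacked straight into /var with lz4 as the tar decompressor, then deleted. A standalone replay of the logged steps (paths as in the log):

    which lz4                                   # extractor must exist in the guest
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4   # populates /var/lib/docker
    sudo rm /preloaded.tar.lz4                  # free the space once extracted
    docker images --format '{{.Repository}}:{{.Tag}}'   # confirm the k8s images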
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
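The doubled ExecStart in the unit fragment above is deliberate systemd idiom: an empty `ExecStart=` in a drop-in clears the ExecStart inherited from the base kubelet.service, and the second line redefines it with minikube's flags. To see the merged result on the node:

    # show the effective unit after the 10-kubeadm.conf drop-in is applied
    sudo systemctl cat kubelet
    # after editing a drop-in, reload units before restarting
    sudo systemctl daemon-reload && sudo systemctl restart kubelet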
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
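With cp_enable and lb_enable both "true", kube-vip holds the 172.27.239.254 VIP via leader election on the plndr-cp-lock lease and load-balances port 8443 across control-plane nodes. Two quick checks once the cluster is up (a sketch; the lease name and VIP come from the config above):

    # who currently owns the control-plane VIP?
    kubectl -n kube-system get lease plndr-cp-lock -o wide
    # the API server must answer on the VIP, not just the node IP
    curl -k https://172.27.239.254:8443/healthz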
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
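The openssl/ln pairs above implement OpenSSL's hashed CA directory layout: `openssl x509 -hash` prints an 8-hex-digit subject hash, and /etc/ssl/certs needs a `<hash>.0` symlink (the suffix is a collision counter) so lookups via -CApath can find the cert. The b5213941.0 link matches minikubeCA in the log; by hand:

    # derive the subject hash and create the lookup symlink
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"   # prints b5213941 for this CA
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"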
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
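The join commands above embed a bootstrap token with a 24-hour TTL (ttl: 24h0m0s in the InitConfiguration). If it has expired by the time another node joins, the standard recovery is to mint a fresh one on an existing control plane:

    # regenerate a worker join command with a new token and the current CA hash
    sudo kubeadm token create --print-join-command
    # control-plane joins additionally need --control-plane (and re-uploaded certs)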
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
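The oom_adj read above confirms the API server's OOM protection: at -16 on the legacy -17..15 oom_adj scale, the kernel OOM killer prefers almost any other process first. A sketch of both the legacy and current views of the same policy:

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj         # -16, as logged
    cat /proc/$pid/oom_score_adj   # equivalent setting on the -1000..1000 scale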
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
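The sed pipeline at 17:08:15 splices a hosts plugin block (plus a log directive) into CoreDNS's Corefile before replacing the ConfigMap, so pods can resolve the Windows host by name. To inspect the patched result:

    # expect this block ahead of the "forward . /etc/resolv.conf" line:
    #     hosts {
    #        172.27.224.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl -n kube-system get configmap coredns -o yaml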
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
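
For context, the GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above appears to be the default-storageclass addon marking "standard" as the default class. A minimal client-go sketch of the same update, assuming a kubeconfig on disk (this run builds its rest.Config in-process, and the annotation key is the upstream convention, not quoted from this log):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; the run above authenticates with the
    	// profile's client.crt/client.key directly.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Upstream annotation that makes a StorageClass the cluster default.
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("standard marked as default StorageClass")
    }
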
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
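
For reference, the switch discovery above runs Get-VMSwitch through powershell.exe and parses the ConvertTo-Json output, keeping only external switches or the well-known Default Switch GUID. A small Go sketch of that shell-out-and-parse pattern (the struct fields mirror the Select clause in the log; error handling trimmed):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // vmSwitch matches the fields selected in the log's Get-VMSwitch pipeline.
    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		"[Console]::OutputEncoding = [Text.Encoding]::UTF8; "+
    			"ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)")
    	out, err := cmd.Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }
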
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
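
"Writing magic tar header" / "Writing SSH key tar header" refers to the boot2docker convention of embedding a small tar stream at the front of the raw data disk: on first boot the guest detects the magic bytes, formats the disk, and installs the key. A rough sketch of producing such a disk, assuming the tar simply starts at offset 0 and the entry names follow the usual .ssh layout (these details are assumptions, not read from this log):

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar writes a small tar archive at the start of a raw disk image
    // containing the SSH public key, following the boot2docker "magic" layout.
    func writeKeyTar(diskPath, pubKeyPath string) error {
    	key, err := os.ReadFile(pubKeyPath)
    	if err != nil {
    		return err
    	}
    	f, err := os.OpenFile(diskPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f) // begins writing at offset 0 of the image
    	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir}); err != nil {
    		return err
    	}
    	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}); err != nil {
    		return err
    	}
    	if _, err := tw.Write(key); err != nil {
    		return err
    	}
    	return tw.Close()
    }

The Convert-VHD/Resize-VHD steps that follow then turn that tiny fixed VHD into a dynamic 20000MB disk without disturbing the embedded tar.
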
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
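
The five cmdlets above (New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive) form a fixed provisioning script, each run in its own powershell.exe process. A condensed Go sketch of driving that sequence the way libmachine does (paths are elided placeholders):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	name, dir := "ha-267500-m02", `C:\...\machines\ha-267500-m02` // real paths elided
    	cmds := []string{
    		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
    		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
    		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
    		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
    		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
    	}
    	for _, c := range cmds {
    		// One powershell.exe invocation per command, as in the log above.
    		if out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", c).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("%s: %v\n%s", c, err, out))
    		}
    	}
    }
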
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
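
Between 17:09:03 and 17:09:31 the driver repeatedly queries the first adapter's first address and sleeps roughly a second whenever the answer is empty, until the DHCP lease shows up. A sketch of that wait loop (the poll command is taken from the log; the timeout is an assumed parameter):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForIP polls Hyper-V for the VM's first adapter address until it is
    // non-empty or the deadline passes.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    			fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)).Output()
    		ip := strings.TrimSpace(string(out))
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second) // matches the ~1s gap visible in the timestamps
    	}
    	return "", fmt.Errorf("no IP for %s within %s", vm, timeout)
    }
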
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
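
Each of the SSH commands above (hostname, /etc/hostname, the /etc/hosts rewrite) goes through minikube's ssh_runner with the generated id_rsa key. A minimal Go equivalent using golang.org/x/crypto/ssh, with host-key checking skipped purely for the sketch (the key path is elided):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote runs one command per SSH session with key authentication,
    // roughly how ssh_runner drives the guest.
    func runRemote(addr, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("172.27.238.86:22", `C:\...\id_rsa`, // key path elided
    		`sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
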
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
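
The clock fix above works by reading date +%s.%N on the guest, computing the skew against the host (4.62s here), and, since it exceeds tolerance, writing the host's epoch back with date -s. A sketch of that decision, with the tolerance value assumed rather than taken from minikube:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // syncCmd returns the remote command needed to correct the guest clock, or
    // "" if the skew is within tolerance. guestOut is the `date +%s.%N` output.
    func syncCmd(guestOut string, tolerance time.Duration) string {
    	secs, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	guest := time.Unix(int64(secs), 0)
    	if d := time.Since(guest); d < tolerance && d > -tolerance {
    		return ""
    	}
    	return fmt.Sprintf("sudo date -s @%d", time.Now().Unix())
    }

    func main() {
    	// Guest epoch from the log; 2s tolerance is an assumption.
    	fmt.Println(syncCmd("1714349431.710726684", 2*time.Second))
    }
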
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
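
One plausible reading of the journal above: the first docker start (00:10:16) launched its own managed containerd and came up cleanly, but the restarted dockerd (00:10:48) waited on the system socket /run/containerd/containerd.sock and hit the 60-second dial deadline, which lines up with the system containerd service having been stopped at 00:10:46 earlier in this run. Commands one might run on ha-267500-m02 to confirm (a hypothetical follow-up, not part of the recorded test):

	sudo systemctl status docker containerd --no-pager
	sudo journalctl -u containerd --no-pager | tail -n 50
	ls -l /run/containerd/containerd.sock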
	
	
	==> Docker <==
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.112319038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.112645138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.171413287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.171650087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.171674687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.172385887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.191648503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.192101604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.192254004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.192607904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:08:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f7c6837c24bd496b2d206c042b287fa458493c3e42236449ac141c747f33a1c/resolv.conf as [nameserver 172.27.224.1]"
	Apr 29 00:08:30 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:08:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/586f91a6b0d3dd98316a49d1eaa889022e8b64afc884a81c6203e058dcfe64b3/resolv.conf as [nameserver 172.27.224.1]"
	Apr 29 00:08:30 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:08:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1f590ad490fe5dd467eef513f378cba4fcbbb140a0b88249e5ae2161c6ff249/resolv.conf as [nameserver 172.27.224.1]"
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.702598907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.702985608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.703297809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.703858811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.827167345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.828050647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.828282248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.828737849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.969544430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.969810031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.970023532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.971462136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	863860b786b42       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                       3 minutes ago       Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                       3 minutes ago       Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988            3 minutes ago       Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                       3 minutes ago       Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016   4 minutes ago       Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                       4 minutes ago       Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                       4 minutes ago       Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                       4 minutes ago       Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                       4 minutes ago       Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:08:33 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:08:33 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:08:33 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:08:33 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m54s
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m54s
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m50s  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m7s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s  node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                3m40s  kubelet          Node ha-267500 status is now: NodeReady
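
The node view above corresponds to `kubectl describe node ha-267500`; the percentage columns are requests and limits against the Allocatable block (950m of 2 CPUs requested gives the 47% figure, and 290Mi of roughly 2113Mi allocatable memory gives 13%). To reproduce it against this cluster one would run (assuming the default minikube kubeconfig context for the ha-267500 profile):

	kubectl --context ha-267500 describe node ha-267500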
	
	
	==> dmesg <==
	[  +0.684590] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"info","ts":"2024-04-29T00:07:55.112039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c914a6e18288a53b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T00:07:55.112084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c914a6e18288a53b received MsgPreVoteResp from c914a6e18288a53b at term 1"}
	{"level":"info","ts":"2024-04-29T00:07:55.112098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c914a6e18288a53b became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T00:07:55.112108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c914a6e18288a53b received MsgVoteResp from c914a6e18288a53b at term 2"}
	{"level":"info","ts":"2024-04-29T00:07:55.112199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c914a6e18288a53b became leader at term 2"}
	{"level":"info","ts":"2024-04-29T00:07:55.112218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c914a6e18288a53b elected leader c914a6e18288a53b at term 2"}
	{"level":"info","ts":"2024-04-29T00:07:55.130032Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.135448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:07:55.147187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:07:55.148136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ff0b9df26eb7be34","local-member-id":"c914a6e18288a53b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.150079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.150305Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.161955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:07:55.135369Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c914a6e18288a53b","local-member-attributes":"{Name:ha-267500 ClientURLs:[https://172.27.226.61:2379]}","request-path":"/0/members/c914a6e18288a53b/attributes","cluster-id":"ff0b9df26eb7be34","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:07:55.210715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:07:55.218498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:07:55.218434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.226.61:2379"}
	2024/04/29 00:08:02 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T00:08:23.622806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.399264ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321679815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" value_size:641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:08:23.623008Z","caller":"traceutil/trace.go:171","msg":"trace[348685892] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"241.172088ms","start":"2024-04-29T00:08:23.381821Z","end":"2024-04-29T00:08:23.622994Z","steps":["trace[348685892] 'process raft request'  (duration: 19.993123ms)","trace[348685892] 'compare'  (duration: 220.074764ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:08:23.828397Z","caller":"traceutil/trace.go:171","msg":"trace[1858965510] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"194.829232ms","start":"2024-04-29T00:08:23.633549Z","end":"2024-04-29T00:08:23.828378Z","steps":["trace[1858965510] 'process raft request'  (duration: 188.756825ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:35.320112Z","caller":"traceutil/trace.go:171","msg":"trace[1514672440] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"139.755985ms","start":"2024-04-29T00:08:35.180333Z","end":"2024-04-29T00:08:35.320089Z","steps":["trace[1514672440] 'process raft request'  (duration: 139.641088ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:35.326839Z","caller":"traceutil/trace.go:171","msg":"trace[147414067] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"143.072983ms","start":"2024-04-29T00:08:35.183755Z","end":"2024-04-29T00:08:35.326828Z","steps":["trace[147414067] 'process raft request'  (duration: 142.867489ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:38.476431Z","caller":"traceutil/trace.go:171","msg":"trace[1245678643] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"108.116145ms","start":"2024-04-29T00:08:38.368296Z","end":"2024-04-29T00:08:38.476412Z","steps":["trace[1245678643] 'process raft request'  (duration: 108.004749ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:39.463077Z","caller":"traceutil/trace.go:171","msg":"trace[192678874] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"113.436313ms","start":"2024-04-29T00:08:39.349621Z","end":"2024-04-29T00:08:39.463057Z","steps":["trace[192678874] 'process raft request'  (duration: 113.028826ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:12:09 up 6 min,  0 users,  load average: 0.58, 0.69, 0.36
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:10:05.095154       1 main.go:227] handling current node
	I0429 00:10:15.100800       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:10:15.100897       1 main.go:227] handling current node
	I0429 00:10:25.106429       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:10:25.106629       1 main.go:227] handling current node
	I0429 00:10:35.120894       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:10:35.121276       1 main.go:227] handling current node
	I0429 00:10:45.136316       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:10:45.136362       1 main.go:227] handling current node
	I0429 00:10:55.150156       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:10:55.150245       1 main.go:227] handling current node
	I0429 00:11:05.155991       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:11:05.156509       1 main.go:227] handling current node
	I0429 00:11:15.171123       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:11:15.171212       1 main.go:227] handling current node
	I0429 00:11:25.176991       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:11:25.177568       1 main.go:227] handling current node
	I0429 00:11:35.194232       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:11:35.194330       1 main.go:227] handling current node
	I0429 00:11:45.211004       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:11:45.212026       1 main.go:227] handling current node
	I0429 00:11:55.225068       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:11:55.225528       1 main.go:227] handling current node
	I0429 00:12:05.231711       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:12:05.231841       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	E0429 00:07:58.394106       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0429 00:07:58.426526       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 00:07:58.481103       1 controller.go:615] quota admission added evaluator for: namespaces
	E0429 00:07:58.517467       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 00:07:58.600561       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:07:59.267127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 00:07:59.274054       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 00:07:59.274177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:08:00.391728       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:08:00.499756       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:08:00.604894       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 00:08:00.617966       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.226.61]
	I0429 00:08:00.619341       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:15.158655       1 shared_informer.go:320] Caches are synced for ephemeral
	I0429 00:08:15.163619       1 shared_informer.go:320] Caches are synced for service account
	I0429 00:08:15.167390       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500" podCIDRs=["10.244.0.0/24"]
	I0429 00:08:15.198244       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 00:08:15.209965       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 00:08:15.272889       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 00:08:15.285567       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 00:08:15.313045       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:08:15.330719       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:08:15.755385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="504.32555ms"
	I0429 00:08:15.767104       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:08:15.791065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.201658ms"
	I0429 00:08:15.793718       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:08:15.797689       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:08:15.865935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.824341ms"
	I0429 00:08:15.866112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.2µs"
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
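The run of "forbidden" warnings above is the usual control-plane bootstrap race: kube-scheduler's informers start before the RBAC bindings for system:kube-scheduler are in place, and the reflectors retry until the final "Caches are synced" line. As a hedged illustration (not something the test harness itself runs), a permission such as the csinodes list above can be probed with client-go's SelfSubjectAccessReview API; the KUBECONFIG-based setup is an assumption:

// Sketch only: probe a scheduler-style RBAC permission with client-go's
// SelfSubjectAccessReview API. The KUBECONFIG env var is an assumption;
// the minikube test harness does not run this check itself.
package main

import (
	"context"
	"fmt"
	"os"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the API server: may the current identity list csinodes cluster-wide?
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csinodes",
			},
		},
	}
	res, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}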
	
	
	==> kubelet <==
	Apr 29 00:08:29 ha-267500 kubelet[2223]: I0429 00:08:29.415710    2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szf9z\" (UniqueName: \"kubernetes.io/projected/cc3610da-f19d-4bfd-88ef-4ea4d2c77c41-kube-api-access-szf9z\") pod \"coredns-7db6d8ff4d-2d6ct\" (UID: \"cc3610da-f19d-4bfd-88ef-4ea4d2c77c41\") " pod="kube-system/coredns-7db6d8ff4d-2d6ct"
	Apr 29 00:08:29 ha-267500 kubelet[2223]: I0429 00:08:29.517113    2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/515de153-8012-4744-8680-eb57fa7cb006-tmp\") pod \"storage-provisioner\" (UID: \"515de153-8012-4744-8680-eb57fa7cb006\") " pod="kube-system/storage-provisioner"
	Apr 29 00:08:29 ha-267500 kubelet[2223]: I0429 00:08:29.517210    2223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5sh\" (UniqueName: \"kubernetes.io/projected/515de153-8012-4744-8680-eb57fa7cb006-kube-api-access-nx5sh\") pod \"storage-provisioner\" (UID: \"515de153-8012-4744-8680-eb57fa7cb006\") " pod="kube-system/storage-provisioner"
	Apr 29 00:08:32 ha-267500 kubelet[2223]: I0429 00:08:32.155085    2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=9.154980814 podStartE2EDuration="9.154980814s" podCreationTimestamp="2024-04-29 00:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 00:08:32.129175331 +0000 UTC m=+29.632189364" watchObservedRunningTime="2024-04-29 00:08:32.154980814 +0000 UTC m=+29.657994847"
	Apr 29 00:08:32 ha-267500 kubelet[2223]: I0429 00:08:32.204606    2223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p7tjz" podStartSLOduration=17.204585244 podStartE2EDuration="17.204585244s" podCreationTimestamp="2024-04-29 00:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 00:08:32.162470377 +0000 UTC m=+29.665484410" watchObservedRunningTime="2024-04-29 00:08:32.204585244 +0000 UTC m=+29.707599177"
	Apr 29 00:09:02 ha-267500 kubelet[2223]: E0429 00:09:02.767274    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:09:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:09:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:09:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:09:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:10:02 ha-267500 kubelet[2223]: E0429 00:10:02.776893    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:10:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:10:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:10:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:10:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:11:02 ha-267500 kubelet[2223]: E0429 00:11:02.768707    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:11:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:11:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:11:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:11:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:12:02 ha-267500 kubelet[2223]: E0429 00:12:02.769806    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:12:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:12:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:12:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:12:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
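The kubelet block above repeats one benign failure every minute: the guest kernel exposes no ip6tables nat table (ip6table_nat is not loaded), so the KUBE-KUBELET-CANARY chain cannot be created and the command exits with status 3. A minimal Go sketch of the same probe, assuming ip6tables is on PATH inside the guest; this is not kubelet code:

// Sketch only: reproduce the kubelet's ip6tables "canary" probe. Assumes
// ip6tables is on PATH; on this guest it fails exactly as logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Creating a chain in the nat table fails with exit status 3 when the
	// kernel lacks ip6table_nat, matching the log lines above.
	out, err := exec.Command("ip6tables", "-w", "-t", "nat",
		"-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		fmt.Printf("canary failed: %v\n%s", err, out)
		return
	}
	fmt.Println("ip6tables nat table is available")
}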
	
	
	==> storage-provisioner [f23ff280b691] <==
	I0429 00:08:31.052093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 00:08:31.098437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 00:08:31.104173       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 00:08:31.136173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 00:08:31.136819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632!
	I0429 00:08:31.138081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56b0b310-7342-47c9-9240-aab5b4e4fa99", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632 became leader
	I0429 00:08:31.238456       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632!
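The storage-provisioner lines show the standard client-go leader-election handshake: acquire the kube-system/k8s.io-minikube-hostpath lock, then start the provisioner controller. A compact sketch of that pattern against current client-go follows; note the Event line above shows the real provisioner pins an older client-go and records the lock on an Endpoints object, so treat this Lease-based version as illustrative:

// Sketch only: the leader-election handshake visible in the log, written
// against current client-go (Lease lock). Timings are illustrative defaults.
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}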
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:12:01.457672    9848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
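The lone stderr line is noise that recurs in every CLI invocation below: the Docker CLI's "default" context has no metadata file under .docker\contexts\meta, so the client falls back to environment-based configuration; it is not the cause of the failure. The directory name in the logged path is the SHA-256 of the context name. A hypothetical helper that mirrors that on-disk layout (an assumption about the CLI's format, not minikube code):

// Sketch only: check whether a Docker CLI context's metadata file exists.
// The CLI stores each context under a SHA-256 hash of its name; this helper
// mirrors that layout for illustration and is not minikube's implementation.
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
	"path/filepath"
)

func contextMetaPath(name string) (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256([]byte(name))
	return filepath.Join(home, ".docker", "contexts", "meta",
		fmt.Sprintf("%x", sum), "meta.json"), nil
}

func main() {
	p, err := contextMetaPath("default")
	if err != nil {
		panic(err)
	}
	if _, err := os.Stat(p); err != nil {
		fmt.Printf("context %q unresolved (%v); falling back to DOCKER_HOST\n",
			"default", err)
		return
	}
	fmt.Println("context metadata found at", p)
}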
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.6813616s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (441.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (722.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- rollout status deployment/busybox
E0428 17:13:41.000674    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:14:08.819725    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:15:36.429734    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:18:41.009780    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:20:36.425056    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
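These cert_rotation errors reference client certificates of profiles deleted earlier in the run (the Audit table below records delete -p functional-285400), so client-go's certificate reloader keeps retrying stale paths; they are unrelated to the busybox rollout. A sketch that flags such dangling kubeconfig entries with clientcmd, not minikube code:

// Sketch only: flag kubeconfig user entries whose client-certificate files
// no longer exist, the condition behind the cert_rotation errors above.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	for name, user := range cfg.AuthInfos {
		if user.ClientCertificate == "" {
			continue
		}
		if _, err := os.Stat(user.ClientCertificate); err != nil {
			fmt.Printf("user %q: stale client cert: %v\n", name, err)
		}
	}
}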
ha_test.go:133: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- rollout status deployment/busybox: exit status 1 (10m3.3253052s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:12:22.878771    2248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
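rollout status polled for ten minutes and gave up once the Deployment exceeded its progress deadline with only one of three replicas available; the unschedulable replicas surface again below. A hedged client-go equivalent of that wait; the "default" namespace is an assumption based on the test manifest, the "busybox" name comes from the output above:

// Sketch only: wait for a Deployment to become fully available, roughly what
// "kubectl rollout status deployment/busybox" polls for.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second,
		10*time.Minute, true, func(ctx context.Context) (bool, error) {
			d, err := client.AppsV1().Deployments("default").
				Get(ctx, "busybox", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			want := int32(1)
			if d.Spec.Replicas != nil {
				want = *d.Spec.Replicas
			}
			fmt.Printf("%d of %d updated replicas are available...\n",
				d.Status.AvailableReplicas, want)
			return d.Status.AvailableReplicas == want, nil
		})
	if err != nil {
		panic(err) // mirrors the test's exit status 1 on deadline
	}
}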
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:26.218246    3728 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:27.730993    2548 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:29.821020     772 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:31.386047    7852 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:36.265225    4100 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:42.116186    6216 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:22:48.676214   10928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:23:02.474033    6904 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:23:17.405823    5112 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0428 17:23:41.012236    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:23:47.219170   15304 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --\n** stderr ** \n\tW0428 17:23:47.219170   15304 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
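Every retry of the jsonpath query returns the single IP 10.244.0.4 because only one busybox replica was ever scheduled. The same listing via client-go, as a sketch; the app=busybox label selector is an assumption about the test manifest's labels:

// Sketch only: the client-go equivalent of
//   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
// restricted to the busybox pods.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app=busybox"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// PodIP stays empty until the pod is scheduled and its sandbox is up,
		// which is why the test saw one IP instead of three.
		fmt.Printf("%s\t%q\n", p.Name, p.Status.PodIP)
	}
}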
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- nslookup kubernetes.io: (1.710274s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- nslookup kubernetes.io: exit status 1 (355.9012ms)

                                                
                                                
** stderr ** 
	W0428 17:23:49.655503    7908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-jxx6x does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-jxx6x could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- nslookup kubernetes.io: exit status 1 (348.1045ms)

                                                
                                                
** stderr ** 
	W0428 17:23:50.016495   10368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-wg44s does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-wg44s could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- nslookup kubernetes.default: exit status 1 (364.9652ms)

                                                
                                                
** stderr ** 
	W0428 17:23:50.997896    3824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-jxx6x does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-jxx6x could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- nslookup kubernetes.default: exit status 1 (343.5904ms)

                                                
                                                
** stderr ** 
	W0428 17:23:51.366027   13464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-wg44s does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-wg44s could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (357.5458ms)

                                                
                                                
** stderr ** 
	W0428 17:23:52.160510   11740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-jxx6x does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-jxx6x could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (337.3114ms)

                                                
                                                
** stderr ** 
	W0428 17:23:52.516993    8912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-wg44s does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-wg44s could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
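"does not have a host assigned" is the API server rejecting exec on a pod with an empty spec.nodeName, i.e. the two replicas that were never scheduled; that is consistent with the earlier StartCluster failure leaving only one usable node. A sketch that lists such pods directly, using the spec.nodeName= field selector kubectl also accepts:

// Sketch only: list pods that have not been assigned to a node, which is
// the condition behind "does not have a host assigned".
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("default").List(context.Background(),
		metav1.ListOptions{FieldSelector: "spec.nodeName="})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("unscheduled: %s (phase=%s)\n", p.Name, p.Status.Phase)
	}
}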
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.5989619s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (7.983687s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-285400                    | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:01 PDT | 28 Apr 24 17:02 PDT |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| delete  | -p functional-285400                 | functional-285400 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:03 PDT | 28 Apr 24 17:05 PDT |
	| start   | -p ha-267500 --wait=true             | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:05 PDT |                     |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- apply -f             | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:12 PDT | 28 Apr 24 17:12 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- rollout status       | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:12 PDT |                     |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500         | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
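Every provisioning step in this section is libmachine shelling out to powershell.exe with -NoProfile -NonInteractive and scraping stdout/stderr, as the [executing ==>] / [stdout =====>] pairs show. A minimal Go sketch of that invocation pattern for the Start-VM step above; the helper is illustrative, not libmachine's actual code:

// Sketch only: the powershell.exe invocation pattern visible in the
// "[executing ==>]" log lines. The cmdlet and VM name are taken from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runPS(command string) (string, string, error) {
	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command)
	var stdout, stderr strings.Builder
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := runPS(`Hyper-V\Start-VM ha-267500`)
	fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\n", out, errOut)
	if err != nil {
		panic(err)
	}
}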
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
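The loop above is the driver's boot wait: after Start-VM, minikube alternately queries the VM state and the first network adapter's first address through PowerShell, retrying roughly once a second until Hyper-V reports an IP (here 172.27.226.61, after about 25 seconds). A minimal Go sketch of that pattern follows; the ps() helper name and the retry budget are illustrative, not minikube's actual API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// ps runs one PowerShell command the way the log shows
	// (powershell.exe -NoProfile -NonInteractive <cmd>) and returns stdout.
	func ps(cmd string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP mirrors the Get-VM polling loop above: check state, then
	// ask for the first adapter's first address, until one appears.
	func waitForIP(vm string) (string, error) {
		for i := 0; i < 120; i++ {
			state, _ := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`)
			if state == "Running" {
				ip, err := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
				if err == nil && ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for an IP on %s", vm)
	}

	func main() {
		ip, err := waitForIP("ha-267500")
		fmt.Println(ip, err)
	}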
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
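The three SSH commands above make the hostname durable and self-resolvable: set the kernel hostname, persist it in /etc/hostname, and idempotently pin a `127.0.1.1 ha-267500` entry in /etc/hosts (rewrite an existing 127.0.1.1 line if present, otherwise append one). A sketch of templating that script for an arbitrary name, assuming the usual fmt import; hostsCmd is an illustrative helper, not minikube's API:

	// hostsCmd renders the idempotent /etc/hosts fix-up shown above.
	func hostsCmd(name string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
	}

hostsCmd("ha-267500") reproduces the script above; running it a second time is a no-op, because the outer grep already matches.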
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
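configureAuth issues a Docker server certificate whose subject alternative names cover every way the daemon may be addressed: loopback, the VM IP, the machine name, and the generic names minikube and localhost. A hedged crypto/x509 sketch of producing such a certificate from an already-loaded CA; the values mirror this log (org jenkins.ha-267500, 26280h expiry from the profile), but the code is an approximation, not minikube's implementation:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// serverCert signs a SAN-bearing server certificate with a pre-loaded CA.
	func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500"}},
			DNSNames:     []string{"ha-267500", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.226.61")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}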
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
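Two details of the unit installation above are worth decoding. The `printf %!s(MISSING)` in the echoed command is Go's fmt package flagging a literal %s that reached the logger without an argument; the command the VM actually ran was a plain printf of the unit text. And the follow-up command is an install-if-changed idiom: diff the rendered unit against the live one, and only on a difference move it into place, daemon-reload, enable, and restart. A sketch of composing that one-liner (installIfChanged is illustrative, assuming fmt):

	// installIfChanged renders the diff-or-install shell idiom seen above;
	// unit is the target path, with the fresh copy staged at unit+".new".
	func installIfChanged(unit string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || "+
				"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
				"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
			unit)
	}

On a fresh VM, diff cannot stat the target ("No such file or directory" above), so the right-hand branch installs the unit, which is why systemd prints the Created symlink line; on later runs an unchanged file makes diff succeed and the whole branch is skipped.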
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
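The clock-fix step reads the guest clock with `date +%s.%N` (mangled to %!s(MISSING).%!N(MISSING) above by the same logger quirk), compares it with the host-side timestamp, and, seeing a 4.59s delta, writes the time back via `sudo date -s @1714349229`. A sketch under the same assumptions (run() is an illustrative SSH helper; assumes fmt, strconv, strings, and time imports):

	// syncClock reads the guest clock over SSH, compares it with the local
	// clock, and resets it when the drift exceeds a threshold.
	func syncClock(run func(string) (string, error), threshold time.Duration) error {
		out, err := run("date +%s.%N")
		if err != nil {
			return err
		}
		guest, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return err
		}
		sec := int64(guest)
		nsec := int64((guest - float64(sec)) * 1e9)
		delta := time.Since(time.Unix(sec, nsec))
		if delta < 0 {
			delta = -delta
		}
		if delta > threshold {
			// As in the log, write the guest's own epoch seconds back,
			// discarding the fractional part.
			_, err = run(fmt.Sprintf("sudo date -s @%d", sec))
		}
		return err
	}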
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
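Note the interleaving above: the registry probe and the version check are issued back to back at 17:07:18 and both complete at 17:07:23 (4.89s and 4.88s), i.e. they run concurrently over separate SSH sessions rather than serially. A sketch of that fan-out (run() is an illustrative helper):

	// postStartChecks launches both probes concurrently and surfaces the
	// first error, matching the paired Run/Completed lines above.
	func postStartChecks(run func(string) error) error {
		errs := make(chan error, 2)
		go func() { errs <- run("curl -sS -m 2 https://registry.k8s.io/") }()
		go func() { errs <- run("cat /version.json") }()
		for i := 0; i < 2; i++ {
			if err := <-errs; err != nil {
				return err
			}
		}
		return nil
	}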
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
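The block above normalizes the container runtimes before choosing one: crictl is pointed at containerd's socket, config.toml is rewritten in place by a fixed series of sed edits (pause image pinned to 3.9, SystemdCgroup forced off to match the cgroupfs driver used throughout this run, legacy runtime names mapped to io.containerd.runc.v2, CNI conf_dir set to /etc/cni/net.d), and bridge netfilter plus IPv4 forwarding are enabled before containerd restarts. A sketch of driving such an edit list (run() is illustrative; only a few of the edits are repeated here):

	// configureContainerd applies the idempotent sed edits over SSH and
	// restarts containerd, as in the sequence above.
	func configureContainerd(run func(string) error) error {
		edits := []string{
			`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
			`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
			`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
		}
		for _, e := range edits {
			if err := run(e); err != nil {
				return err
			}
		}
		return run("sudo systemctl restart containerd")
	}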
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
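The preload path above avoids eight separate image pulls: a ~360 MB lz4 tarball of the image store is scp'd into the VM, untarred straight into /var (preserving xattrs so file capabilities survive), and deleted; the subsequent `docker images` listing confirms the v1.30.0 control-plane images are present, so cache_images.go skips loading. A sketch of the same three steps (scp/run are illustrative helpers):

	// applyPreload copies the preload tarball in, unpacks it into /var,
	// and removes it, matching the sequence in the log.
	func applyPreload(scp func(local, remote string) error, run func(string) error) error {
		local := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4`
		if err := scp(local, "/preloaded.tar.lz4"); err != nil {
			return err
		}
		if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			return err
		}
		return run("sudo rm -f /preloaded.tar.lz4")
	}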
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
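With the generated kubeadm.yaml staged (scp'd above as /var/tmp/minikube/kubeadm.yaml.new, 2154 bytes) and the kubelet running, cluster bootstrap proceeds by handing that file to the version-pinned kubeadm binary. A hedged sketch of the invocation shape only; the exact flag set and the .new-to-final rename vary by minikube version (assumes os/exec):

	// kubeadmInit shows how a rendered config file is consumed via
	// kubeadm's --config flag; not the exact invocation this build uses.
	func kubeadmInit() ([]byte, error) {
		return exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubeadm",
			"init", "--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	}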
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
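
The certs.go lines above build the per-profile PKI: a client cert for kubectl, an apiserver serving cert whose SANs cover the service VIP (10.96.0.1), localhost, the node IP (172.27.226.61) and the HA virtual IP (172.27.239.254), and an aggregator proxy-client cert. A minimal standalone sketch of the same idea with Go's crypto/x509, using a throwaway CA in place of the cached minikubeCA key pair (names and lifetimes here are illustrative, not minikube's actual helpers):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf serving cert with the IP SANs from the log line above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.27.226.61"), net.ParseIP("172.27.239.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
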
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
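
The vm_assets/scp pairs above are minikube's copy plan: each NewFileAsset line records one host path and its destination inside the guest, and the scp lines then execute that plan over the SSH runner ("scp memory" streams generated bytes, such as the kubeconfig, with no file on the host side). A toy sketch of the same bookkeeping, with paths taken from the log (the fileAsset type is illustrative):

```go
package main

import "fmt"

// fileAsset pairs a host source with its in-guest destination,
// mirroring what the NewFileAsset log lines record.
type fileAsset struct{ src, dst string }

func main() {
	assets := []fileAsset{
		{`C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt`, "/var/lib/minikube/certs/ca.crt"},
		{`C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt`, "/var/lib/minikube/certs/apiserver.crt"},
		{`C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key`, "/var/lib/minikube/certs/proxy-client.key"},
	}
	for _, a := range assets {
		// The real code streams each file over the SSH connection.
		fmt.Printf("scp %s --> %s\n", a.src, a.dst)
	}
}
```
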
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
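
Each certificate above gets the same three-step treatment: link it into /usr/share/ca-certificates, ask OpenSSL for its subject hash, then symlink <hash>.0 in /etc/ssl/certs so OpenSSL-based clients can find it. A rough local sketch of that loop body (minikube runs these commands through the SSH runner as root; exec.Command here merely stands in for it):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink the lookup path expects.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```
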
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
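
Lines kubeadm.go:154-162 above are the stale-config pass: since none of the four kubeconfigs exists yet, each grep exits with status 2 and the file is removed so kubeadm init regenerates it cleanly. A compact sketch of that loop, again with exec.Command standing in for the SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the file is missing or doesn't
		// mention the control-plane endpoint; either way the file
		// can't be reused, so drop it.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%s stale or absent, removing\n", conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
```
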
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
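
The join commands printed by kubeadm embed a discovery hash so joining nodes can pin the cluster CA before trusting the API server. For reference, that value is SHA-256 over the CA certificate's SubjectPublicKeyInfo; a small sketch that recomputes it from the ca.crt copied earlier:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's --discovery-token-ca-cert-hash is SHA-256 over the
	// CA's SubjectPublicKeyInfo, printed with the "sha256:" prefix.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
```
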
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
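
With a single node and CNI left unset, cni.go picks kindnet and applies it in two steps: write the manifest onto the node ("scp memory"), then kubectl apply it with the kubelet's bundled binary. A bare-bones sketch of those two steps (the manifest body is a placeholder for the 2438-byte YAML the log mentions):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# kindnet DaemonSet + ClusterRole YAML goes here\n")
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
		panic(err)
	}
	// Apply with the version-matched kubectl shipped alongside the kubelet.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```
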
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
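
The burst of `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: the default service account only appears once kube-controller-manager has settled, so minikube polls roughly every 500ms (11.5s in this run) before binding kube-system:default to cluster-admin. A sketch of that polling shape:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account present; safe to create RBAC binding")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```
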
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
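
The start.go:946 line confirms the CoreDNS tweak that just ran: the Corefile gains a hosts{} stanza resolving host.minikube.internal to the Hyper-V gateway before queries fall through to /etc/resolv.conf. A sketch that rebuilds the exact get | sed | replace pipeline from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	// The sed program is copied from the Run: line above; it inserts the
	// hosts block before the forward plugin and a log directive before errors.
	sed := `sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log'`
	pipeline := fmt.Sprintf("%s -n kube-system get configmap coredns -o yaml | %s | %s replace -f -",
		kubectl, sed, kubectl)
	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
	fmt.Println(string(out), err)
}
```
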
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
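
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses is the default-storageclass addon re-asserting that "standard" is the cluster default. An equivalent sketch using client-go instead of minikube's raw round-trippers (the annotation key is the standard Kubernetes one; the kubeconfig path is the in-guest one from this log):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET the class, mark it default, PUT it back.
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```
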
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
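
Every "[executing ==>]" line in this stretch is the Hyper-V driver shelling out to powershell.exe -NoProfile -NonInteractive and capturing stdout/stderr, one cmdlet at a time: New-VHD (fixed, then converted to dynamic and resized), New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, Start-VM. A condensed sketch of that wrapper and a few of the steps just run for ha-267500-m02 (the ps helper is illustrative, not the driver's real API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// ps runs one cmdlet the way the driver does and returns combined output.
func ps(script string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", script).CombinedOutput()
	return string(out), err
}

func main() {
	home := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02`
	steps := []string{
		`Hyper-V\New-VM ha-267500-m02 -Path '` + home + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2`,
		`Hyper-V\Start-VM ha-267500-m02`,
	}
	for _, s := range steps {
		out, err := ps(s)
		fmt.Printf("[executing ==>] : %s\n%s(err=%v)\n", s, out, err)
	}
}
```
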
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
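
The step above signs a per-node server certificate against minikube's local CA, embedding the node's IP and hostnames as SANs. As a point of reference only, a roughly equivalent signing step can be sketched with openssl (hypothetical commands using bash process substitution; minikube performs this in Go, and the file names here just mirror the paths in the log):

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-267500-m02" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:172.27.238.86,DNS:ha-267500-m02,DNS:localhost,DNS:minikube")

The SAN list matters here: the Docker daemon is later started with --tlsverify, so clients will reject the endpoint unless the address they dial appears in the certificate.
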
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
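
The one-liner above is a compare-and-install: the rendered unit is swapped in, and docker reloaded and enabled, only when it differs from (or is missing at) the installed path, which is why this first provision prints the "can't stat" diff error before the symlink is created. Unrolled into an equivalent standalone form (same commands as the one-liner, reformatted for readability):

	if ! sudo diff -u /lib/systemd/system/docker.service \
	               /lib/systemd/system/docker.service.new; then
	  # files differ, or the target does not exist yet: install and apply
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi
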
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
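
The clock-sync exchange above reads the guest clock with date +%s.%N, compares it to the host's wall time (a 4.62s drift here), and pins the guest to the host's epoch with date -s. A minimal sketch of the same fix over plain SSH, assuming a POSIX shell on the controlling side ($VM_IP stands in for the address the log resolved, 172.27.238.86; "docker" is the guest user shown in the log's SSH config):

	HOST_EPOCH=$(date +%s)                          # host time, seconds since epoch
	GUEST_EPOCH=$(ssh docker@$VM_IP 'date +%s')     # guest time via SSH
	echo "drift: $((HOST_EPOCH - GUEST_EPOCH))s"
	ssh docker@$VM_IP "sudo date -s @$HOST_EPOCH"   # force-set the guest clock
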
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
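
The proximate failure in the journal above is the second docker start at 00:10:48: dockerd[1016] gives up after the 60s dial deadline on /run/containerd/containerd.sock (00:11:48), so systemctl restart docker exits 1. Earlier in the run, the first dockerd instance (pid 662) launched its own managed containerd on /var/run/docker/containerd/containerd.sock, and the provisioning step stopped the system containerd unit during runtime detection (systemctl stop -f containerd at 17:10:46), which would be consistent with the system socket never appearing. A few standard checks for this state, assuming shell access to the node (a diagnostic sketch, not part of the test run):

	systemctl status containerd --no-pager          # is the system containerd unit up?
	ls -l /run/containerd/containerd.sock           # does the socket dockerd dials exist?
	journalctl -u containerd --no-pager | tail -n 50
	sudo systemctl restart docker && journalctl -u docker -f
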
	
	
	==> Docker <==
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.828282248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.828737849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.969544430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.969810031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.970023532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:08:30 ha-267500 dockerd[1322]: time="2024-04-29T00:08:30.971462136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:08 ha-267500 dockerd[1316]: 2024/04/29 00:12:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:08 ha-267500 dockerd[1316]: 2024/04/29 00:12:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295389744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295470544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295484444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295576143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:23 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:12:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e5d506c62d643e183acfa3bf809dae3fd3586a0c0e861873ab6dea691c8a1d2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 00:12:24 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:12:24Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879102600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879417500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879449000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.881361301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              15 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         15 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     16 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         16 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         16 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         16 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:24:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                15m   kubelet          Node ha-267500 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"info","ts":"2024-04-29T00:07:55.130032Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.135448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:07:55.147187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:07:55.148136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ff0b9df26eb7be34","local-member-id":"c914a6e18288a53b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.150079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.150305Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.161955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:07:55.135369Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c914a6e18288a53b","local-member-attributes":"{Name:ha-267500 ClientURLs:[https://172.27.226.61:2379]}","request-path":"/0/members/c914a6e18288a53b/attributes","cluster-id":"ff0b9df26eb7be34","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:07:55.210715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:07:55.218498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:07:55.218434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.226.61:2379"}
	2024/04/29 00:08:02 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T00:08:23.622806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.399264ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321679815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" value_size:641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:08:23.623008Z","caller":"traceutil/trace.go:171","msg":"trace[348685892] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"241.172088ms","start":"2024-04-29T00:08:23.381821Z","end":"2024-04-29T00:08:23.622994Z","steps":["trace[348685892] 'process raft request'  (duration: 19.993123ms)","trace[348685892] 'compare'  (duration: 220.074764ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:08:23.828397Z","caller":"traceutil/trace.go:171","msg":"trace[1858965510] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"194.829232ms","start":"2024-04-29T00:08:23.633549Z","end":"2024-04-29T00:08:23.828378Z","steps":["trace[1858965510] 'process raft request'  (duration: 188.756825ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:35.320112Z","caller":"traceutil/trace.go:171","msg":"trace[1514672440] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"139.755985ms","start":"2024-04-29T00:08:35.180333Z","end":"2024-04-29T00:08:35.320089Z","steps":["trace[1514672440] 'process raft request'  (duration: 139.641088ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:35.326839Z","caller":"traceutil/trace.go:171","msg":"trace[147414067] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"143.072983ms","start":"2024-04-29T00:08:35.183755Z","end":"2024-04-29T00:08:35.326828Z","steps":["trace[147414067] 'process raft request'  (duration: 142.867489ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:38.476431Z","caller":"traceutil/trace.go:171","msg":"trace[1245678643] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"108.116145ms","start":"2024-04-29T00:08:38.368296Z","end":"2024-04-29T00:08:38.476412Z","steps":["trace[1245678643] 'process raft request'  (duration: 108.004749ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:39.463077Z","caller":"traceutil/trace.go:171","msg":"trace[192678874] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"113.436313ms","start":"2024-04-29T00:08:39.349621Z","end":"2024-04-29T00:08:39.463057Z","steps":["trace[192678874] 'process raft request'  (duration: 113.028826ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:17:56.642961Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-29T00:17:56.66754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"23.795536ms","hash":3071087103,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2490368,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-04-29T00:17:56.667625Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3071087103,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-29T00:22:56.666326Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1501}
	{"level":"info","ts":"2024-04-29T00:22:56.675835Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1501,"took":"9.066919ms","hash":954752441,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T00:22:56.676016Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":954752441,"revision":1501,"compact-revision":964}
	
	
	==> kernel <==
	 00:24:11 up 18 min,  0 users,  load average: 0.15, 0.28, 0.31
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:22:05.825584       1 main.go:227] handling current node
	I0429 00:22:15.831984       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:22:15.832494       1 main.go:227] handling current node
	I0429 00:22:25.842401       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:22:25.842500       1 main.go:227] handling current node
	I0429 00:22:35.857498       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:22:35.857540       1 main.go:227] handling current node
	I0429 00:22:45.864029       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:22:45.864127       1 main.go:227] handling current node
	I0429 00:22:55.880064       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:22:55.880110       1 main.go:227] handling current node
	I0429 00:23:05.895634       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:05.895778       1 main.go:227] handling current node
	I0429 00:23:15.909066       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:15.909161       1 main.go:227] handling current node
	I0429 00:23:25.922382       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:25.922424       1 main.go:227] handling current node
	I0429 00:23:35.935079       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:35.935169       1 main.go:227] handling current node
	I0429 00:23:45.947421       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:45.947544       1 main.go:227] handling current node
	I0429 00:23:55.963519       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:55.966958       1 main.go:227] handling current node
	I0429 00:24:05.973409       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:05.973504       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	E0429 00:07:58.517467       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0429 00:07:58.600561       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:07:59.267127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 00:07:59.274054       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 00:07:59.274177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:08:00.391728       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:08:00.499756       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:08:00.604894       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 00:08:00.617966       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.226.61]
	I0429 00:08:00.619341       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:15.313045       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:08:15.330719       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:08:15.755385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="504.32555ms"
	I0429 00:08:15.767104       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:08:15.791065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.201658ms"
	I0429 00:08:15.793718       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:08:15.797689       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:08:15.865935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.824341ms"
	I0429 00:08:15.866112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.2µs"
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
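	# The forbidden-list errors above are a normal startup race: the scheduler
	# begins watching before its RBAC grants have propagated, and the final
	# line shows its caches did sync. A hedged spot check that the default
	# binding exists (standard kubeadm object name, not taken from this run):
	kubectl --context ha-267500 get clusterrolebinding system:kube-scheduler -o name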
	
	
	==> kubelet <==
	Apr 29 00:20:02 ha-267500 kubelet[2223]: E0429 00:20:02.770410    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:20:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:20:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:20:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:20:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:21:02 ha-267500 kubelet[2223]: E0429 00:21:02.772410    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:21:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:21:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:21:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:21:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:22:02 ha-267500 kubelet[2223]: E0429 00:22:02.769460    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:22:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:22:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:22:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:22:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:23:02 ha-267500 kubelet[2223]: E0429 00:23:02.766809    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:23:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:23:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:23:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:23:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:24:02 ha-267500 kubelet[2223]: E0429 00:24:02.770172    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:24:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:24:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:24:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:24:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
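	# The hourly canary error above means the guest kernel exposes no ip6tables
	# "nat" table. A hedged check, assuming the Buildroot kernel ships the
	# module (standard Linux module name, not confirmed by this log):
	minikube ssh -p ha-267500 -- 'lsmod | grep ip6table_nat || sudo modprobe ip6table_nat'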
	
	
	==> storage-provisioner [f23ff280b691] <==
	I0429 00:08:31.052093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 00:08:31.098437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 00:08:31.104173       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 00:08:31.136173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 00:08:31.136819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632!
	I0429 00:08:31.138081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56b0b310-7342-47c9-9240-aab5b4e4fa99", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632 became leader
	I0429 00:08:31.238456       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:24:04.449066    8300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
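
The recurring stderr warning above is a stale Docker CLI context on the Windows host (its meta.json file no longer exists); it is noise rather than a cluster error. A hedged cleanup sketch using the standard docker CLI (not taken from this run):

    docker context ls
    docker context use default
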
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.6176614s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-jxx6x busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-jxx6x busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-jxx6x busybox-fc5497c4f-wg44s:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-jxx6x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsmns (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wsmns:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  113s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  113s (x4 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (722.81s)
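
Both Pending replicas above are blocked by the deployment's pod anti-affinity: with only one Ready node, at most one busybox pod can schedule. A hedged way to inspect the offending term, assuming the Deployment is named busybox (the name is inferred from the ReplicaSet busybox-fc5497c4f, not confirmed by this log):

    kubectl --context ha-267500 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
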

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (44.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- sh -c "ping -c 1 172.27.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-5xln2 -- sh -c "ping -c 1 172.27.224.1": exit status 1 (10.4462048s)

                                                
                                                
-- stdout --
	PING 172.27.224.1 (172.27.224.1): 56 data bytes
	
	--- 172.27.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:24:25.996549    4936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.27.224.1) from pod (busybox-fc5497c4f-5xln2): exit status 1
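
100% packet loss toward 172.27.224.1 (the Hyper-V host-side gateway) most often means the Windows firewall drops inbound ICMPv4 echo on the vEthernet switch. A hedged PowerShell sketch to allow it; the rule name and broad scope are assumptions, not the CI host's actual configuration:

    New-NetFirewallRule -DisplayName 'Allow ICMPv4-In (minikube)' -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow
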
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-jxx6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (354.1926ms)

                                                
                                                
** stderr ** 
	W0428 17:24:36.439027    7016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-jxx6x does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-jxx6x could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-267500 -- exec busybox-fc5497c4f-wg44s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (356.0694ms)

                                                
                                                
** stderr ** 
	W0428 17:24:36.806863    8408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-wg44s does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-wg44s could not resolve 'host.minikube.internal': exit status 1
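Both remaining nslookup probes fail with "does not have a host assigned", meaning the busybox replicas were still Pending (unscheduled) when the test exec'd into them, so DNS was never actually exercised. A sketch of a guard that would make the probe meaningful, polling until the pod is bound to a node (pod name, context, and timeout are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			// empty output means .spec.nodeName is unset, i.e. still Pending
			out, err := exec.Command("kubectl", "--context", "ha-267500",
				"get", "pod", "busybox-fc5497c4f-jxx6x",
				"-o", "jsonpath={.spec.nodeName}").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("scheduled on %s\n", out)
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the pod to be scheduled")
	}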
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.5538238s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (8.0535417s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
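The cni.go lines above record the default CNI selection: no CNI was specified and a multi-node cluster was requested, so kindnet is recommended and NetworkPlugin=cni is set. A simplified sketch of that decision (the real logic in minikube's cni package also weighs the container runtime and explicit flags):

	package main

	import "fmt"

	// chooseCNI mirrors the logged decision: a requested multi-node cluster
	// with no explicit CNI gets kindnet (simplified stand-in, not minikube's code).
	func chooseCNI(explicit string, multiNodeRequested bool, nodes int) string {
		if explicit != "" {
			return explicit
		}
		if multiNodeRequested || nodes > 1 {
			return "kindnet"
		}
		return "" // single node: leave CNI to the runtime's default
	}

	func main() {
		fmt.Println(chooseCNI("", true, 0)) // "kindnet", as in this run
	}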
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
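The cluster config dump above is minikube's ClusterConfig struct logged with fmt's %+v verb, which prints Field:value pairs without quoting; that is why the long line wraps awkwardly in the report. A toy reproduction with a hypothetical, trimmed-down struct (not minikube's actual type):

	package main

	import "fmt"

	// Trimmed-down, hypothetical stand-ins for minikube's config types.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		Memory           int
		CPUs             int
		KubernetesConfig KubernetesConfig
	}

	func main() {
		cfg := ClusterConfig{
			Name: "ha-267500", Driver: "hyperv", Memory: 2200, CPUs: 2,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.30.0", ClusterName: "ha-267500", ContainerRuntime: "docker",
			},
		}
		// %+v prints {Field:value ...} with no quoting, the exact shape seen above
		fmt.Printf("cluster config:\n%+v\n", cfg)
	}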
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
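Every [executing ==>]/[stdout =====>] pair in this log is the hyperv driver shelling out to powershell.exe and parsing the result; for structured data it forces UTF-8 output and ConvertTo-Json, as in the switch query above. A Go sketch of that pattern (fields trimmed to the three queried; SwitchType 1 is Internal, which is what the Default Switch reports; BOM handling and error detail elided):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the three properties selected in the query above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int // 1 = Internal, the value the Default Switch reports
	}

	func main() {
		ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			panic(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", switches)
	}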
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
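The sequence above is the boot2docker disk preparation trick: create a tiny 10MB fixed VHD, overwrite its payload with a raw tar stream carrying the freshly generated SSH key (the "Writing magic tar header" / "Writing SSH key tar header" lines), convert the VHD to dynamic, then resize it to the requested 20000MB so the guest can format and grow into it on first boot. A sketch of the tar-writing step (writing to a plain file to stay self-contained; the real driver writes into the VHD image, and the authorized_keys layout follows docker-machine's boot2docker convention):

	package main

	import (
		"archive/tar"
		"os"
	)

	func main() {
		key, err := os.ReadFile("id_rsa.pub")
		if err != nil {
			panic(err)
		}
		f, err := os.Create("disk-payload.tar")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		tw := tar.NewWriter(f)
		defer tw.Close()
		// the guest's automount scans the raw disk for a tar holding .ssh keys
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key))}
		if err := tw.WriteHeader(hdr); err != nil {
			panic(err)
		}
		if _, err := tw.Write(key); err != nil {
			panic(err)
		}
	}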
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
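The "Waiting for host to start..." loop above alternates two PowerShell probes, the VM state and the first address on the first NIC, pausing between rounds until DHCP hands out a lease (about 25 seconds here before 172.27.226.61 appears). A condensed Go sketch of the same loop (VM name from this run; retry budget illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// vmIP asks Hyper-V for the first address on the VM's first NIC;
	// it stays empty until the guest obtains a DHCP lease.
	func vmIP(name string) string {
		ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
		out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		for i := 0; i < 60; i++ {
			if ip := vmIP("ha-267500"); ip != "" {
				fmt.Println("got IP:", ip) // 172.27.226.61 after ~25s in this run
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for an address")
	}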
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
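configureAuth (13.6s here) generates a server certificate carrying the SANs logged above (127.0.0.1, the VM's DHCP address, the profile name, localhost, minikube), signed by the local minikube CA, then scp's it into /etc/docker for dockerd's TLS listener. A self-signed stand-in showing the same SAN set with crypto/x509 (the real step signs with the CA key rather than self-signing; the expiry mirrors CertExpiration from the config):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-267500", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.226.61")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}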
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
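The clock fix above is a read-compare-set pattern: fix.go reads the guest clock over SSH with date +%s.%N, compares it to the host clock (here the guest ran 4.59s ahead of the 17:07:05 host timestamp), and, since the drift exceeded tolerance, resets it with sudo date -s @<unix-seconds>. A sketch of that flow, assuming a hypothetical runSSH helper:

package provision

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// maybeSyncClock mirrors the fix.go flow above: read the guest clock via
// SSH, compare it against the host clock, and reset it when the absolute
// drift exceeds maxDrift. runSSH is a hypothetical helper.
func maybeSyncClock(runSSH func(cmd string) (string, error), maxDrift time.Duration) error {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	drift := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
	if drift < 0 {
		drift = -drift
	}
	if drift <= maxDrift {
		return nil // guest clock is close enough; leave it alone
	}
	_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	return err
}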
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
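The run of sed commands above is the whole containerd reconfiguration: pin the sandbox image to registry.k8s.io/pause:3.9, force SystemdCgroup = false so containerd matches the cgroupfs driver chosen for this VM, migrate io.containerd.runtime.v1.linux and runc.v1 runtime references to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, then daemon-reload and restart. Driven from Go it looks roughly like this (sed commands copied from the log; runSSH is a hypothetical helper):

package provision

// configureContainerdCgroupfs replays the key config.toml edits logged
// above, then reloads systemd and restarts containerd.
func configureContainerdCgroupfs(runSSH func(cmd string) (string, error)) error {
	cmds := []string{
		`sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"`,
		`sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"`,
		`sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"`,
		`sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart containerd",
	}
	for _, c := range cmds {
		if _, err := runSSH(c); err != nil {
			return err
		}
	}
	return nil
}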
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
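docker.go:574 shows Docker itself is also pinned to cgroupfs; the mechanism is a 130-byte /etc/docker/daemon.json pushed over scp, followed by a full restart (2.55s here). The file's contents are not printed in this log, so the sketch below is an assumption about what it plausibly contains, not a transcript:

package provision

import "encoding/json"

// renderDaemonJSON sketches a /etc/docker/daemon.json pinning Docker to
// the cgroupfs driver. The exact fields minikube writes are not shown in
// this log; treat this as an illustrative guess.
func renderDaemonJSON() ([]byte, error) {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	return json.MarshalIndent(cfg, "", "  ")
}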
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
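The /etc/hosts edit above is an idempotent upsert: copy the file minus any existing host.minikube.internal line, append a fresh mapping to the gateway address 172.27.224.1 discovered on the vEthernet (Default Switch) interface, and copy the temp file back with sudo. The same one-liner can be assembled in Go (hypothetical helper; the tab between the two verbs in the echo is literal, matching the logged command):

package provision

import "fmt"

// upsertHostsEntry builds the shell one-liner used above: strip any stale
// line for name from /etc/hosts, append "ip<TAB>name", and copy the
// result back with sudo. Re-running it never duplicates the entry.
func upsertHostsEntry(ip, name string) string {
	return fmt.Sprintf(
		`/bin/bash -c "{ grep -v $'\t%s$' "/etc/hosts"; echo "%s	%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""`,
		name, ip, name)
}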
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
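The block above is the image preload fast path: docker images showed no registry.k8s.io/kube-apiserver:v1.30.0, so minikube scp'd the ~360 MB preloaded tarball into the VM (1.9s), unpacked it into /var with tar -I lz4 (8.9s), restored repositories.json, and restarted Docker; the follow-up docker images listing confirms the full v1.30.0 control-plane set, so per-image pulls are skipped. Condensed into a sketch (runSSH and scp are hypothetical helpers standing in for minikube's runners):

package provision

import "strings"

// ensurePreloadedImages sketches the preload fast path above: if the
// marker image is missing, push and extract the lz4 tarball, then
// restart Docker so it picks up the restored image store.
func ensurePreloadedImages(runSSH func(cmd string) (string, error), scp func(src, dst string) error) error {
	out, err := runSSH("docker images --format {{.Repository}}:{{.Tag}}")
	if err != nil {
		return err
	}
	if strings.Contains(out, "registry.k8s.io/kube-apiserver:v1.30.0") {
		return nil // images already preloaded, skipping loading
	}
	if err := scp(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4`, "/preloaded.tar.lz4"); err != nil {
		return err
	}
	if _, err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	_, err = runSSH("sudo systemctl restart docker")
	return err
}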
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
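The generated file is one multi-document YAML covering four kubeadm API objects: InitConfiguration (advertise address and bind port 8443, the cri-dockerd socket, an empty taints list so this control plane also runs workloads), ClusterConfiguration (controlPlaneEndpoint control-plane.minikube.internal:8443, pod subnet 10.244.0.0/16), KubeletConfiguration (cgroupfs, disk eviction disabled), and KubeProxyConfiguration. minikube renders it from a template before scp'ing it to /var/tmp/minikube/kubeadm.yaml.new; a toy text/template rendering of just the first document (not minikube's real template) would be:

package provision

import (
	"bytes"
	"text/template"
)

// kubeadmTmpl is a tiny extract of a kubeadm config template; the real
// minikube template covers all four documents shown above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
`

// renderKubeadm fills the template with the node's IP and API port.
func renderKubeadm(nodeIP string, port int) (string, error) {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	var buf bytes.Buffer
	err := t.Execute(&buf, struct {
		NodeIP string
		Port   int
	}{nodeIP, port})
	return buf.String(), err
}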
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
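This static pod is what makes the HA profile reachable at a single address: kube-vip runs on the host network in kube-system, takes the plndr-cp-lock Lease for leader election, advertises the VIP 172.27.239.254 via ARP, and, because the ip_vs* modules probed at 17:07:44 loaded cleanly, also load-balances port 8443 across control-plane nodes (lb_enable). It authenticates with the super-admin.conf kubeconfig mounted at /etc/kubernetes/admin.conf. The auto-enable decision reduces to a module check, roughly as below (runSSH is a hypothetical helper; the modprobe command is copied from the log):

package provision

// enableKubeVIPLB mirrors the auto-enable check above: control-plane
// load-balancing is switched on only if the IPVS kernel modules load.
func enableKubeVIPLB(runSSH func(cmd string) (string, error)) bool {
	_, err := runSSH(`sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"`)
	return err == nil
}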
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
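Each CA lands in the guest twice: the PEM under /usr/share/ca-certificates, and a <subject-hash>.0 symlink under /etc/ssl/certs (3ec20f2e.0, b5213941.0, and 51391683.0 above), because OpenSSL resolves trust anchors by that hashed filename. The hash is exactly what openssl x509 -hash -noout prints; the symlink step replicated in Go (needs root to write /etc/ssl/certs):

package provision

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash creates the /etc/ssl/certs/<hash>.0 symlink that
// OpenSSL uses to look up a CA certificate, as in the log above.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.Symlink(certPath, link) // equivalent of ln -fs
}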
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clu
sterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
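
For context, the --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's documented scheme, a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal Go sketch that recomputes it (the ca.crt path is an assumption; this is not minikube's actual code):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumed location of the cluster CA on the minikube guest.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

Run against the same CA, this should reproduce the sha256:5f5d7b85... value shown above.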
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
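
The run of "kubectl get sa default" lines above is a ~500ms polling loop: kubeadm creates the default service account asynchronously after init, so minikube retries until it appears. A minimal sketch of that pattern (the overall timeout is an assumption; the kubectl and kubeconfig paths mirror the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default service account")
    }

In this run the wait resolved after about 11.5s, matching the duration metric above.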
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
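
For reference, the sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts stanza before the "forward . /etc/resolv.conf" line (and a "log" directive before "errors"), so the resulting Corefile contains:

        hosts {
           172.27.224.1 host.minikube.internal
           fallthrough
        }

This is what makes host.minikube.internal resolve to the Hyper-V host from inside the cluster, as the "host record injected" line confirms.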
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
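
The rest.Config dump above comes from loading the kubeconfig minikube just wrote. A minimal client-go sketch of the same step (package versions and the check at the end are assumptions, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path as logged above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }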
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
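
The GET followed by a PUT on storageclasses/standard is consistent with marking the "standard" class as the cluster default; the request body is not logged, so the following client-go sketch is a guess at the equivalent operation (the annotation key is the upstream default-class marker):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	// Upstream marker for the cluster's default StorageClass.
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }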
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
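
The driver discovers the switch by asking PowerShell for JSON, as shown above. A small sketch of decoding that output in Go (same query, minimal error handling; not minikube's actual decoder):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive",
    		`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }

Here only "Default Switch" (SwitchType 1, i.e. internal) exists, which is why the log later reports: Using switch "Default Switch".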
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
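
The VHD sequence above is a three-step dance: create a small fixed-size VHD, write data directly into it (the "Writing magic tar header" / "Writing SSH key tar header" lines suggest the SSH key is embedded in the raw disk for the guest to pick up on first boot, per the boot2docker convention), then convert to a dynamic VHD and grow it. A sketch shelling out to the same cmdlets (cmdlets and flags mirror the log; the tarball-writing step is elided):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ps runs one PowerShell command the way libmachine does above.
    func ps(cmd string) error {
    	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", cmd).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02`
    	steps := []string{
    		`Hyper-V\New-VHD -Path '` + dir + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
    		// ...the SSH-key tarball would be written into fixed.vhd at this point...
    		`Hyper-V\Convert-VHD -Path '` + dir + `\fixed.vhd' -DestinationPath '` + dir + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
    		`Hyper-V\Resize-VHD -Path '` + dir + `\disk.vhd' -SizeBytes 20000MB`,
    	}
    	for _, s := range steps {
    		if err := ps(s); err != nil {
    			panic(err)
    		}
    	}
    }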
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
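
The "Waiting for host to start..." stretch above alternates two PowerShell queries, VM state and first adapter IP, until DHCP hands out an address. A minimal sketch of that loop (same queries as the log; the 1s sleep matches the observed cadence):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func ps(cmd string) string {
    	out, _ := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	for {
    		state := ps(`( Hyper-V\Get-VM ha-267500-m02 ).state`)
    		ip := ps(`(( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]`)
    		if state == "Running" && ip != "" {
    			fmt.Println("host up at", ip)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }

In this run the IP (172.27.238.86) appeared after roughly 25 seconds of polling.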
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
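
The hostname and /etc/hosts commands above run over SSH with the generated id_rsa. A minimal golang.org/x/crypto/ssh sketch of one such call (not minikube's actual transport; host-key checking is disabled here purely for brevity):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; verify in real use
    	}
    	client, err := ssh.Dial("tcp", "172.27.238.86:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname`)
    	fmt.Println(string(out), err)
    }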
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
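
The server-cert step above signs a certificate for the SAN list in the log (127.0.0.1, the VM IP, the hostname, localhost, minikube) with the minikube CA. A compact crypto/x509 sketch of the idea; the CA key format, RSA key size, and serial scheme are assumptions (the validity period reuses the 26280h CertExpiration from the config dump above):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func mustPEM(path string) []byte {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	blk, _ := pem.Decode(b)
    	return blk.Bytes
    }

    func main() {
    	base := `C:\Users\jenkins.minikube1\minikube-integration\.minikube`
    	caCert, err := x509.ParseCertificate(mustPEM(base + `\certs\ca.pem`))
    	if err != nil {
    		panic(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM(base + `\certs\ca-key.pem`)) // assumes PKCS#1 RSA
    	if err != nil {
    		panic(err)
    	}
    	key, _ := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified serial
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-267500-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.238.86")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }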
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
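
The unit install above is idempotent: diff the new file against the installed one, and only on a difference move it into place and daemon-reload / enable / restart (here diff fails because docker.service does not exist yet, so the unit is installed fresh). A local-file sketch of the same pattern; in the log this logic runs remotely as a single shell command:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	newUnit, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		panic(err)
    	}
    	old, _ := os.ReadFile(unit) // may not exist yet, as in this run
    	if bytes.Equal(old, newUnit) {
    		return // unit already up to date; no restart needed
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    	}
    }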
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
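The 4.624341084s delta above is the guest epoch (read over SSH with date +%s.%N) minus the host-side timestamp recorded when createHost finished; since it exceeds minikube's tolerance, the guest clock is reset with sudo date -s @<epoch>, exactly as logged at 17:10:36. A minimal POSIX-shell equivalent, with the guest address taken from the log (SSH key path and skew threshold are host-specific and omitted):

	GUEST=$(ssh docker@172.27.238.86 'date +%s.%N')   # guest clock, seconds since the epoch
	HOST=$(date +%s.%N)                               # local clock
	echo "skew: $(echo "$GUEST - $HOST" | bc)s"
	# reset the guest to the local epoch when the skew is too large
	ssh docker@172.27.238.86 "sudo date -s @${HOST%.*}"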
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
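The three warnings mean the proxy checker could not confirm that the new node's IP (172.27.238.86) falls inside NO_PROXY, whose current value (172.27.226.61) names a single host rather than an address block. When a proxy is actually in use, widening NO_PROXY to a CIDR covering the Hyper-V switch's subnet satisfies the check; for the two addresses in this log that could be, for example (subnet assumed, verify against the virtual switch):

	# PowerShell, on the Windows host, before running minikube start
	$env:NO_PROXY = "172.27.224.0/20"   # spans both 172.27.226.61 and 172.27.238.86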
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
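The run from 17:10:45.869 through 17:10:46.514 is a single in-place reconfiguration of /etc/containerd/config.toml. The same edits, grouped and annotated (commands copied from the log):

	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml  # pin the pause image
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml                       # cgroupfs, not systemd, as cgroup driver
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml                  # retire the v1 linux runtime for the v2 runc shim
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml                      # point CNI at the standard conf dir
	sudo systemctl daemon-reload && sudo systemctl restart containerd                                                       # apply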
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
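The 130-byte /etc/docker/daemon.json written here is not echoed in the log; given the stated goal of switching docker to the cgroupfs driver, a plausible shape is the following (contents assumed, not taken from this run):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}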
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
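Reading the journal excerpt back: the first dockerd (pid 662) came up fine with its own managed containerd, was stopped cleanly at 00:10:47, and the relaunched dockerd (pid 1016) then spent its full 60s timeout dialing /run/containerd/containerd.sock, the system containerd socket, which is consistent with containerd having been stopped at 17:10:46.654 and not brought back before the restart. Hands-on triage from the host would look like this (profile and node names from the log; minikube ssh flags assumed):

	minikube ssh -p ha-267500 -n m02 "sudo systemctl status docker.service --no-pager"
	minikube ssh -p ha-267500 -n m02 "sudo journalctl -xeu docker.service --no-pager"
	minikube ssh -p ha-267500 -n m02 "sudo systemctl status containerd --no-pager"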
	
	
	==> Docker <==
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:09 ha-267500 dockerd[1316]: 2024/04/29 00:12:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295389744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295470544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295484444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295576143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:23 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:12:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e5d506c62d643e183acfa3bf809dae3fd3586a0c0e861873ab6dea691c8a1d2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 00:12:24 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:12:24Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879102600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879417500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879449000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.881361301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         16 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     16 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         17 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         17 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         17 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:24:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:22:50 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                16m   kubelet          Node ha-267500 status is now: NodeReady
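This dump is the standard node view. The allocation figures are internally consistent: 950m of CPU requests on a node with 2000m allocatable is 950/2000 = 47.5%, reported as 47%. To pull just that summary back out of the cluster:

	kubectl describe node ha-267500 | grep -A6 "Allocated resources"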
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"info","ts":"2024-04-29T00:07:55.130032Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.135448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:07:55.147187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:07:55.148136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ff0b9df26eb7be34","local-member-id":"c914a6e18288a53b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.150079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.150305Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:07:55.161955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:07:55.135369Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c914a6e18288a53b","local-member-attributes":"{Name:ha-267500 ClientURLs:[https://172.27.226.61:2379]}","request-path":"/0/members/c914a6e18288a53b/attributes","cluster-id":"ff0b9df26eb7be34","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:07:55.210715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:07:55.218498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:07:55.218434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.226.61:2379"}
	2024/04/29 00:08:02 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T00:08:23.622806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.399264ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321679815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" value_size:641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:08:23.623008Z","caller":"traceutil/trace.go:171","msg":"trace[348685892] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"241.172088ms","start":"2024-04-29T00:08:23.381821Z","end":"2024-04-29T00:08:23.622994Z","steps":["trace[348685892] 'process raft request'  (duration: 19.993123ms)","trace[348685892] 'compare'  (duration: 220.074764ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:08:23.828397Z","caller":"traceutil/trace.go:171","msg":"trace[1858965510] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"194.829232ms","start":"2024-04-29T00:08:23.633549Z","end":"2024-04-29T00:08:23.828378Z","steps":["trace[1858965510] 'process raft request'  (duration: 188.756825ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:35.320112Z","caller":"traceutil/trace.go:171","msg":"trace[1514672440] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"139.755985ms","start":"2024-04-29T00:08:35.180333Z","end":"2024-04-29T00:08:35.320089Z","steps":["trace[1514672440] 'process raft request'  (duration: 139.641088ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:35.326839Z","caller":"traceutil/trace.go:171","msg":"trace[147414067] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"143.072983ms","start":"2024-04-29T00:08:35.183755Z","end":"2024-04-29T00:08:35.326828Z","steps":["trace[147414067] 'process raft request'  (duration: 142.867489ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:38.476431Z","caller":"traceutil/trace.go:171","msg":"trace[1245678643] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"108.116145ms","start":"2024-04-29T00:08:38.368296Z","end":"2024-04-29T00:08:38.476412Z","steps":["trace[1245678643] 'process raft request'  (duration: 108.004749ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:08:39.463077Z","caller":"traceutil/trace.go:171","msg":"trace[192678874] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"113.436313ms","start":"2024-04-29T00:08:39.349621Z","end":"2024-04-29T00:08:39.463057Z","steps":["trace[192678874] 'process raft request'  (duration: 113.028826ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:17:56.642961Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-04-29T00:17:56.66754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":964,"took":"23.795536ms","hash":3071087103,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2490368,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-04-29T00:17:56.667625Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3071087103,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2024-04-29T00:22:56.666326Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1501}
	{"level":"info","ts":"2024-04-29T00:22:56.675835Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1501,"took":"9.066919ms","hash":954752441,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T00:22:56.676016Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":954752441,"revision":1501,"compact-revision":964}
	
	
	==> kernel <==
	 00:24:56 up 19 min,  0 users,  load average: 0.07, 0.24, 0.30
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:22:55.880110       1 main.go:227] handling current node
	I0429 00:23:05.895634       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:05.895778       1 main.go:227] handling current node
	I0429 00:23:15.909066       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:15.909161       1 main.go:227] handling current node
	I0429 00:23:25.922382       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:25.922424       1 main.go:227] handling current node
	I0429 00:23:35.935079       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:35.935169       1 main.go:227] handling current node
	I0429 00:23:45.947421       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:45.947544       1 main.go:227] handling current node
	I0429 00:23:55.963519       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:23:55.966958       1 main.go:227] handling current node
	I0429 00:24:05.973409       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:05.973504       1 main.go:227] handling current node
	I0429 00:24:15.991142       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:15.992411       1 main.go:227] handling current node
	I0429 00:24:26.002034       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:26.002077       1 main.go:227] handling current node
	I0429 00:24:36.016834       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:36.016994       1 main.go:227] handling current node
	I0429 00:24:46.023066       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:46.023177       1 main.go:227] handling current node
	I0429 00:24:56.043896       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:24:56.044118       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:07:59.267127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 00:07:59.274054       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 00:07:59.274177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:08:00.391728       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:08:00.499756       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:08:00.604894       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 00:08:00.617966       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.226.61]
	I0429 00:08:00.619341       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:15.313045       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:08:15.330719       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:08:15.755385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="504.32555ms"
	I0429 00:08:15.767104       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:08:15.791065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.201658ms"
	I0429 00:08:15.793718       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:08:15.797689       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:08:15.865935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.824341ms"
	I0429 00:08:15.866112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.2µs"
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
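The burst of "forbidden" list/watch failures above is confined to the first seconds after apiserver startup (00:07:59) and stops once RBAC bootstrapping finishes, so it reads as a normal startup race rather than a lasting permissions problem. A quick way to confirm the grants settled, using the same kubeconfig context as the rest of this run; both checks should print "yes" on a healthy cluster:

	# Impersonate the scheduler's identity and ask the apiserver directly.
	kubectl --context ha-267500 auth can-i list pods --as=system:kube-scheduler
	kubectl --context ha-267500 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler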
	
	
	==> kubelet <==
	Apr 29 00:20:02 ha-267500 kubelet[2223]: E0429 00:20:02.770410    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:20:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:20:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:20:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:20:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:21:02 ha-267500 kubelet[2223]: E0429 00:21:02.772410    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:21:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:21:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:21:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:21:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:22:02 ha-267500 kubelet[2223]: E0429 00:22:02.769460    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:22:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:22:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:22:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:22:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:23:02 ha-267500 kubelet[2223]: E0429 00:23:02.766809    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:23:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:23:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:23:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:23:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:24:02 ha-267500 kubelet[2223]: E0429 00:24:02.770172    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:24:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:24:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:24:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:24:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
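The hourly "Could not set up iptables canary" errors above come from the kubelet probing the ip6tables nat table, which this Buildroot guest kernel does not provide; on an IPv4-only cluster they are noisy but harmless, consistent with kube-proxy's earlier "No iptables support for family" ipFamily="IPv6" line. A hedged way to check from the host (it is an assumption that the guest kernel ships the ip6table_nat module at all; if it does not, the error can simply be ignored):

	# Is any ip6tables nat support loaded in the guest?
	minikube ssh -p ha-267500 -- "lsmod | grep -i ip6table"

	# If the module exists but is not loaded, loading it silences the canary error.
	minikube ssh -p ha-267500 -- "sudo modprobe ip6table_nat"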
	
	
	==> storage-provisioner [f23ff280b691] <==
	I0429 00:08:31.052093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 00:08:31.098437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 00:08:31.104173       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 00:08:31.136173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 00:08:31.136819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632!
	I0429 00:08:31.138081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56b0b310-7342-47c9-9240-aab5b4e4fa99", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632 became leader
	I0429 00:08:31.238456       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-267500_c29b34a4-e5d1-441c-af40-1ba1265b4632!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:24:48.706829    4288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
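The stderr warning above recurs on every minikube invocation in this run: the Docker CLI's "default" context metadata file is missing on the Jenkins host, so the CLI logs the failure and falls back. It does not affect the Hyper-V driver, but it can usually be cleared on the host with standard docker CLI commands; a minimal sketch (re-selecting the built-in default context may be enough for the CLI, and minikube's client library, to resolve it again):

	# List known contexts; the warning means "default" has no metadata on disk.
	docker context ls

	# Re-select the built-in default context.
	docker context use default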
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
E0428 17:25:04.181577    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.6867368s)
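The cert_rotation error above (and its twins later in this section) appears to come from the test binary's client-go certificate watcher still tracking kubeconfig entries for profiles deleted earlier in the run, such as functional-285400; the referenced client.crt no longer exists on disk. A hedged cleanup sketch on the host, using a context name taken from this log:

	# Show leftover contexts from earlier profiles.
	kubectl config get-contexts

	# Dropping the stale context keeps later invocations from loading the
	# dangling cert path.
	kubectl config delete-context functional-285400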
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-jxx6x busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-jxx6x busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-jxx6x busybox-fc5497c4f-wg44s:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-jxx6x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsmns (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wsmns:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m37s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m37s (x4 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (44.37s)
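Both pending busybox replicas above are blocked by the same FailedScheduling event: with only one schedulable node in the cluster at that point, pod anti-affinity leaves the extra replicas nowhere to land, and the scheduler reports no lower-priority victim to preempt. A quick way to inspect the rule from the same context (the exact anti-affinity spec is not shown in this log, so the command simply prints whatever is set):

	# Print the busybox deployment's anti-affinity stanza; a required
	# podAntiAffinity term on app=busybox would explain the 0/1 verdict.
	kubectl --context ha-267500 get deploy busybox \
	  -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'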

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (258.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-267500 -v=7 --alsologtostderr
E0428 17:25:36.424177    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:26:59.600009    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-267500 -v=7 --alsologtostderr: (3m12.0619308s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
E0428 17:28:41.016954    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: exit status 2 (34.23247s)

                                                
                                                
-- stdout --
	ha-267500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-267500-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-267500-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:28:21.657681   14684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 17:28:21.665709   14684 out.go:291] Setting OutFile to fd 716 ...
	I0428 17:28:21.665709   14684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:28:21.665709   14684 out.go:304] Setting ErrFile to fd 1572...
	I0428 17:28:21.665709   14684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:28:21.685722   14684 out.go:298] Setting JSON to false
	I0428 17:28:21.685722   14684 mustload.go:65] Loading cluster: ha-267500
	I0428 17:28:21.685722   14684 notify.go:220] Checking for updates...
	I0428 17:28:21.686731   14684 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:28:21.686731   14684 status.go:255] checking status of ha-267500 ...
	I0428 17:28:21.688342   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:28:23.808890   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:23.809585   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:23.809585   14684 status.go:330] ha-267500 host status = "Running" (err=<nil>)
	I0428 17:28:23.809676   14684 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:28:23.809839   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:28:25.885590   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:25.885590   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:25.885685   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:28:28.379023   14684 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:28:28.379178   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:28.379234   14684 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:28:28.394449   14684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:28:28.394449   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:28:30.430924   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:30.431538   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:30.431538   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:28:33.035900   14684 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:28:33.036051   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:33.036298   14684 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:28:33.137785   14684 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.743328s)
	I0428 17:28:33.151094   14684 ssh_runner.go:195] Run: systemctl --version
	I0428 17:28:33.175315   14684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:28:33.208305   14684 kubeconfig.go:125] found "ha-267500" server: "https://172.27.239.254:8443"
	I0428 17:28:33.208411   14684 api_server.go:166] Checking apiserver status ...
	I0428 17:28:33.222755   14684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 17:28:33.269389   14684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup
	W0428 17:28:33.294380   14684 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0428 17:28:33.307372   14684 ssh_runner.go:195] Run: ls
	I0428 17:28:33.314812   14684 api_server.go:253] Checking apiserver healthz at https://172.27.239.254:8443/healthz ...
	I0428 17:28:33.322074   14684 api_server.go:279] https://172.27.239.254:8443/healthz returned 200:
	ok
	I0428 17:28:33.327278   14684 status.go:422] ha-267500 apiserver status = Running (err=<nil>)
	I0428 17:28:33.327340   14684 status.go:257] ha-267500 status: &{Name:ha-267500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 17:28:33.327340   14684 status.go:255] checking status of ha-267500-m02 ...
	I0428 17:28:33.327967   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:28:35.391316   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:35.392088   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:35.392088   14684 status.go:330] ha-267500-m02 host status = "Running" (err=<nil>)
	I0428 17:28:35.392088   14684 host.go:66] Checking if "ha-267500-m02" exists ...
	I0428 17:28:35.392856   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:28:37.452723   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:37.453649   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:37.453649   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:28:39.913365   14684 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:28:39.913365   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:39.913365   14684 host.go:66] Checking if "ha-267500-m02" exists ...
	I0428 17:28:39.929618   14684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:28:39.929618   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:28:41.930744   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:41.930744   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:41.931670   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:28:44.372816   14684 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:28:44.373001   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:44.373076   14684 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:28:44.463448   14684 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5338221s)
	I0428 17:28:44.477520   14684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:28:44.504285   14684 kubeconfig.go:125] found "ha-267500" server: "https://172.27.239.254:8443"
	I0428 17:28:44.504404   14684 api_server.go:166] Checking apiserver status ...
	I0428 17:28:44.517213   14684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0428 17:28:44.541800   14684 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0428 17:28:44.541933   14684 status.go:422] ha-267500-m02 apiserver status = Stopped (err=<nil>)
	I0428 17:28:44.541933   14684 status.go:257] ha-267500-m02 status: &{Name:ha-267500-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 17:28:44.541933   14684 status.go:255] checking status of ha-267500-m03 ...
	I0428 17:28:44.543367   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:28:46.544615   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:46.544615   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:46.544615   14684 status.go:330] ha-267500-m03 host status = "Running" (err=<nil>)
	I0428 17:28:46.544615   14684 host.go:66] Checking if "ha-267500-m03" exists ...
	I0428 17:28:46.546440   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:28:48.611488   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:48.611488   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:48.611488   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m03 ).networkadapters[0]).ipaddresses[0]
	I0428 17:28:51.048887   14684 main.go:141] libmachine: [stdout =====>] : 172.27.233.131
	
	I0428 17:28:51.049559   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:51.049559   14684 host.go:66] Checking if "ha-267500-m03" exists ...
	I0428 17:28:51.063352   14684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:28:51.063352   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:28:53.137959   14684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:28:53.138516   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:53.138516   14684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m03 ).networkadapters[0]).ipaddresses[0]
	I0428 17:28:55.590500   14684 main.go:141] libmachine: [stdout =====>] : 172.27.233.131
	
	I0428 17:28:55.591542   14684 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:28:55.591726   14684 sshutil.go:53] new ssh client: &{IP:172.27.233.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m03\id_rsa Username:docker}
	I0428 17:28:55.691799   14684 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6284393s)
	I0428 17:28:55.703785   14684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:28:55.730060   14684 status.go:257] ha-267500-m03 status: &{Name:ha-267500-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr" : exit status 2
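The exit status 2 is driven entirely by ha-267500-m02: its host is Running but kubelet and apiserver are Stopped, which `minikube status` reports as a non-zero exit. A hedged first step, before digging into the post-mortem below, is to look at the kubelet unit on that node (the -n flag selects a node within the profile):

	# Inspect kubelet on the stopped secondary control-plane node.
	minikube ssh -p ha-267500 -n ha-267500-m02 -- "sudo systemctl status kubelet --no-pager"

	# If the unit is merely dead, restarting it often brings the node back.
	minikube ssh -p ha-267500 -n ha-267500-m02 -- "sudo systemctl restart kubelet"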
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.4251683s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (7.9731869s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-267500 -v=7                | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:25 PDT | 28 Apr 24 17:28 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
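Editor's note: the switch discovery above shells out to powershell.exe and parses the ConvertTo-Json output. A minimal Go sketch of that pattern follows — struct fields mirror the Id/Name/SwitchType properties selected in the logged query; this is illustrative, not the driver's actual code, and error handling is abbreviated.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the properties selected by the Get-VMSwitch query above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        // The @(...) wrapper forces ConvertTo-Json to emit a JSON array even
        // for a single switch, so unmarshalling into a slice always works.
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            "[Console]::OutputEncoding = [Text.Encoding]::UTF8; "+
                "ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)")
        out, err := cmd.Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }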
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
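Editor's note: the VHD sequence above creates a small *fixed* VHD (raw-writable, so the tar header and SSH key can be written straight into it before first boot), then converts it to a dynamic disk and resizes it to the requested 20000MB. A hedged Go sketch of that orchestration, with hypothetical paths standing in for the machine directory:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ps runs one PowerShell command, as the hyperv driver's logged calls do.
    func ps(command string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
        fmt.Printf("%s\n", out)
        return err
    }

    func main() {
        // Hypothetical paths; the real driver derives them from the machine dir.
        fixed := `C:\tmp\fixed.vhd`
        disk := `C:\tmp\disk.vhd`
        steps := []string{
            fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
            // (the magic tar header and SSH key are written into fixed.vhd here)
            fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
            fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes 20000MB`, disk),
        }
        for _, s := range steps {
            if err := ps(s); err != nil {
                panic(err)
            }
        }
    }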
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
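Editor's note: the "Waiting for host to start..." phase above alternates between reading the VM state and the first adapter's first IP address until DHCP has assigned one (here 172.27.226.61 after four empty polls). A self-contained Go sketch of that poll loop, assuming the same PowerShell expressions as in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func psOut(command string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until the VM reports Running and its first NIC has an address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, _ := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if state == "Running" {
                ip, _ := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("ha-267500", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("VM IP:", ip)
    }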
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
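Editor's note: the remote script above is idempotent — if the hostname is already mapped it does nothing; if a 127.0.1.1 entry exists it is rewritten in place rather than duplicated; otherwise a new entry is appended. A Go sketch of the same decision logic applied to hosts-file content (illustrative only; the provisioner runs the shell version over SSH):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the grep/sed logic of the remote script above.
    func ensureHostname(hosts, name string) string {
        // Already present with this name? Leave the file untouched.
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(ensureHostname(string(data), "ha-267500"))
    }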
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
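Editor's note: the server cert generated above carries SANs for 127.0.0.1, the VM IP, and the host names, as the "san=[...]" line shows. A minimal Go sketch with crypto/x509 of a certificate carrying those SANs — self-signed here for brevity, whereas the real provisioner signs with the minikube CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as logged: loopback, the VM IP, and the machine/host names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.226.61")},
            DNSNames:    []string{"ha-267500", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }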
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
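Editor's note: the unit install above is guarded — the rendered file goes to docker.service.new, and only when diff reports a difference (here it fails outright because no docker.service existed yet) does the mv/daemon-reload/enable/restart branch run. A Go sketch of that compare-then-replace guard, with the unit body abbreviated:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit mirrors the remote "diff || { mv; daemon-reload; restart; }"
    // pattern: restart docker only when the rendered unit actually changed.
    func installUnit(path string, rendered []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: skip the disruptive restart
        }
        if err := os.WriteFile(path, rendered, 0644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
            panic(err)
        }
    }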
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
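Editor's note: the guest clock is read with `date +%s.%N` and compared against the host's wall clock; here the guest is about 4.59s ahead, so the next command forces it back with `sudo date -s @<unix-seconds>`. A sketch of that delta computation (the log's sample value is reused; the one-second threshold is an assumption for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1714349229.688972111" into a time.Time.
    // The fractional part is assumed to be 9 digits (nanoseconds), as here.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1714349229.688972111")
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        fmt.Printf("guest/host clock delta: %v\n", delta)
        if delta < -time.Second || delta > time.Second {
            fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
        }
    }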
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
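Editor's note: the sed edits above force containerd onto the cgroupfs driver and the io.containerd.runc.v2 shim; the pivotal rewrite is flipping SystemdCgroup inside config.toml while preserving indentation. A Go sketch of that one substitution (sample TOML content is illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // Same substitution as the logged sed command: keep the leading
        // whitespace capture group, flip the driver to cgroupfs.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }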
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
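Editor's note: "Will wait 60s for socket path" above is a timeout-bounded stat poll on /var/run/cri-dockerd.sock. A minimal Go sketch of that wait (the 500ms poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes,
    // like the 60s wait on /var/run/cri-dockerd.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("socket is ready")
    }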
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
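Editor's note: getIPForInterface above walks the host adapters, skips those whose names do not start with the requested prefix, and takes the IPv4 address of the matching "vEthernet (Default Switch)" adapter (172.27.224.1, skipping the fe80:: link-local address). A net.Interfaces sketch of the same lookup:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterfacePrefix returns the first IPv4 address of the first
    // interface whose name starts with prefix (cf. ip.go in the log above).
    func ipForInterfacePrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil // skips IPv6 link-local entries
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // e.g. 172.27.224.1 in the run above
    }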
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
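The preload step above is a streamed lz4 tar extraction; a minimal sketch of the same technique:

	# Unpack the preloaded image tarball into /var, preserving extended
	# attributes (file capabilities), then drop the archive and verify.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	docker images --format '{{.Repository}}:{{.Tag}}'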
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
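The empty ExecStart= line in the unit dump above is deliberate: in a systemd drop-in it clears the base unit's command before the replacement is assigned. A minimal sketch of the same pattern (paths as written later in this run; kubelet flags abbreviated):

	# Drop-in override: the first ExecStart= resets the command list,
	# the second sets the new command line.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload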
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
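Recent kubeadm releases (v1.26+) can sanity-check a generated config like the one above before init runs; a sketch, run on the node once the file is staged (path as used by the init command later in this run):

	# Validate the generated config against the kubeadm API types.
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml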
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
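The manifest above runs kube-vip in ARP mode with leader election: the holder of the plndr-cp-lock lease answers for the VIP 172.27.239.254 and, with lb_enable set, load-balances port 8443 across control-plane nodes. Two quick checks once the cluster is up (a sketch; lease name taken from the env block above, run on a control-plane node):

	# Who currently owns the VIP lease?
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
	# Is the VIP bound on this node's interface?
	ip addr show eth0 | grep 172.27.239.254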
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
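The one-liner above is an idempotent /etc/hosts update: strip any stale record for the name, append a fresh one, and copy the result back into place. Unrolled for readability (bash, for the $'\t' tab literal):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '172.27.239.254\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$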
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
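The ln -fs calls above install OpenSSL subject-hash symlinks: tools resolve trust anchors in /etc/ssl/certs by the <hash>.0 naming scheme, where the hash comes from openssl x509 -hash. The equivalent by hand (a sketch):

	# Compute the subject hash and install the <hash>.0 symlink OpenSSL expects.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"

(b5213941 in the commands above is exactly that hash for the minikube CA.)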
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
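The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA, per the kubeadm documentation (a sketch, assuming an RSA CA key; this cluster's certificateDir is /var/lib/minikube/certs):

	# sha256 of the CA public key, DER-encoded; should match the hash above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'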
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
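A quick way to confirm the CNI rollout after the apply above (a sketch; the app=kindnet label is an assumption about the applied manifest):

	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide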
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
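The burst of "kubectl get sa default" calls above is a readiness poll: the token controller creates the default ServiceAccount shortly after the control plane settles, and minikube retries until it appears. The same wait as a plain loop (a sketch):

	# Block until the "default" ServiceAccount exists.
	until sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n default get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done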
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
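The sed pipeline above splices a hosts block into CoreDNS's Corefile so that host.minikube.internal resolves to the Hyper-V gateway (172.27.224.1) from inside pods. To inspect the result (a sketch):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected fragment:
	#     hosts {
	#        172.27.224.1 host.minikube.internal
	#        fallthrough
	#     }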
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
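The Get-VMSwitch query above is how the driver decides where to attach the new VM's NIC: it shells out to PowerShell, asks ConvertTo-Json for the switch list, and prefers an External switch, falling back to the "Default Switch" GUID (SwitchType 1 is Internal). A minimal Go sketch of that pattern, assuming powershell.exe is on PATH; the struct and helper names are illustrative, not minikube's actual API:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    // listSwitches mirrors the "[executing ==>]" line above: run PowerShell
    // non-interactively and decode its JSON output.
    func listSwitches() ([]vmSwitch, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            "[Console]::OutputEncoding = [Text.Encoding]::UTF8; "+
                "ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)")
        out, err := cmd.Output()
        if err != nil {
            return nil, fmt.Errorf("powershell: %w", err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            return nil, err
        }
        return switches, nil
    }

    func main() {
        switches, err := listSwitches()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }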
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
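The three-step VHD dance above is deliberate: the driver creates a tiny fixed-size VHD (whose data region starts at byte 0), writes a "magic" tar archive containing the SSH key at the head of the file so the guest can provision it on first boot, then converts the image to a dynamic VHD and resizes it to the requested 20000MB. A sketch of the tar-writing step under those assumptions; the file and key names are illustrative:

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    // writeKeyTar writes a tar archive containing the SSH key at the head of
    // the raw fixed-VHD file, where the guest looks for it on first boot
    // (assumed behavior; the real driver builds this archive internally).
    func writeKeyTar(diskPath string, pubKey []byte) error {
        f, err := os.OpenFile(diskPath, os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f) // a fixed VHD's data region starts at offset 0
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        if err := writeKeyTar("fixed.vhd", key); err != nil {
            log.Fatal(err)
        }
    }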
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
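Note the retry cadence above: Get-VM reports Running almost immediately, but the adapter's ipaddresses list stays empty until the guest's integration services come up, so the driver alternates state checks and IP queries with a short sleep until an address appears (about 25 seconds here). A compact version of that loop, with runPS standing in for the "[executing ==>]" helper; names are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // runPS is an assumed helper mirroring the "[executing ==>]" lines above.
    func runPS(script string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        return string(out), err
    }

    // waitForIP polls the first adapter's first address until the guest
    // reports one, as in the loop logged above.
    func waitForIP(vmName string, attempts int) (string, error) {
        for i := 0; i < attempts; i++ {
            ip, err := runPS(fmt.Sprintf("((Hyper-V\\Get-VM %s).networkadapters[0]).ipaddresses[0]", vmName))
            if err == nil && strings.TrimSpace(ip) != "" {
                return strings.TrimSpace(ip), nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("%s: no IP after %d attempts", vmName, attempts)
    }

    func main() {
        ip, err := waitForIP("ha-267500-m02", 60)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("guest IP:", ip)
    }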
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
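The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line already ends in ha-267500-m02, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. Rendered from Go, the command template looks roughly like this (hostname is assumed shell-safe; the real flow pipes the result over SSH):

    package main

    import "fmt"

    // hostsFixCmd renders the /etc/hosts fix-up shown in the log for a given
    // hostname. Illustrative only; minikube builds this command internally.
    func hostsFixCmd(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname)
    }

    func main() { fmt.Println(hostsFixCmd("ha-267500-m02")) }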
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
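configureAuth generates a per-machine server certificate whose SANs cover every name the Docker daemon might be reached by: loopback, the guest IP, the machine name, localhost, and minikube. A self-signed approximation using only the standard library (the real provisioner signs with ca.pem/ca-key.pem rather than self-signing, and the values below simply echo the log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-267500-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.238.86")},
        }
        // Self-signed here for brevity; the provisioner signs with the CA pair.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }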
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
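The restart at 17:10:15 uses a compare-and-swap idiom: write the rendered unit to docker.service.new, diff it against the installed unit, and only move it into place and bounce the daemon when they differ (here diff fails because no unit exists yet, so the new file is installed unconditionally). Sketched as a helper around an SSH runner; sshRun is an assumed stand-in for ssh_runner:

    package main

    import "fmt"

    // updateDockerUnit mirrors the idempotent swap above: replace the unit and
    // restart docker only when the rendered file differs from the installed one.
    func updateDockerUnit(sshRun func(string) (string, error)) error {
        cmd := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
            `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
            `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
        _, err := sshRun(cmd)
        return err
    }

    func main() {
        fake := func(cmd string) (string, error) { fmt.Println("ssh:", cmd); return "", nil }
        if err := updateDockerUnit(fake); err != nil {
            fmt.Println("error:", err)
        }
    }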
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
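The delta computed above (about 4.6s between guest and host) is why the driver immediately runs sudo date -s: freshly booted Hyper-V guests can carry skewed clocks, which later breaks TLS certificate validity checks. A sketch of the check-then-set step, with sshRun again as an assumed helper:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock mirrors the fix above: read the guest's `date +%s.%N`,
    // compare it to the host clock, and push the host time into the guest
    // when the drift exceeds a threshold.
    func syncGuestClock(sshRun func(string) (string, error), threshold time.Duration) error {
        out, err := sshRun("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(int64(secs), 0))
        if drift > threshold || drift < -threshold {
            _, err = sshRun(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }

    func main() {
        fake := func(cmd string) (string, error) { return "1714349431.710726684", nil }
        if err := syncGuestClock(fake, 2*time.Second); err != nil {
            fmt.Println("error:", err)
        }
    }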
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
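Before settling on Docker, the start path normalizes containerd's config with a series of in-place sed edits, the key one forcing SystemdCgroup = false so the runtime matches the "cgroupfs" driver the kubelet will be told to use. A local, illustrative equivalent of that rewrite (the real flow runs sed over SSH against /etc/containerd/config.toml):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        path := "config.toml" // stand-in for /etc/containerd/config.toml
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }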
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
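The journalctl excerpt pins down the failure: the first dockerd (pid 662) came up by spawning its own managed containerd, but after minikube's restart the new dockerd (pid 1016) tried to dial the system socket /run/containerd/containerd.sock and hit the 60-second deadline (00:10:48 to 00:11:48). Since minikube had force-stopped the system containerd moments earlier (17:10:46 above), a stale or half-torn-down socket is a plausible culprit. A few triage commands for this state (a sketch; none of these were run by the test):

    # on the affected node (m02), assuming the ha-267500 profile
    minikube -p ha-267500 ssh -n m02 "sudo systemctl status containerd docker"
    minikube -p ha-267500 ssh -n m02 "ls -l /run/containerd/containerd.sock"
    # if a stale socket is the culprit, clearing it before the next restart
    # lets dockerd fall back to its managed containerd
    minikube -p ha-267500 ssh -n m02 "sudo rm -f /run/containerd/containerd.sock && sudo systemctl restart docker"
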
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
	
	
	==> Docker <==
	Apr 29 00:12:23 ha-267500 dockerd[1322]: time="2024-04-29T00:12:23.295576143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:23 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:12:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e5d506c62d643e183acfa3bf809dae3fd3586a0c0e861873ab6dea691c8a1d2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 00:12:24 ha-267500 cri-dockerd[1225]: time="2024-04-29T00:12:24Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879102600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879417500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.879449000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:12:24 ha-267500 dockerd[1322]: time="2024-04-29T00:12:24.881361301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:55 ha-267500 dockerd[1316]: 2024/04/29 00:24:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         20 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         20 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         20 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              20 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         20 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     21 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         21 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         21 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         21 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         21 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
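This table is assembled by minikube's log collector; roughly the same view can be pulled straight from the node through the crictl endpoint configured earlier (sketch):

    minikube -p ha-267500 ssh "sudo crictl ps -a"
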
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:29:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                20m   kubelet          Node ha-267500 status is now: NodeReady
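This describe output (Ready=True, 950m of 2000m CPU requested) is what the failing HA tests poll against; to reproduce it against this profile (sketch):

    kubectl describe node ha-267500
    kubectl get nodes -o wide
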
	
	
	Name:               ha-267500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T17_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:29:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.233.131
	  Hostname:    ha-267500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0562d38ef374b73969ab15fed947e11
	  System UUID:                c94a104a-b670-854e-ac89-f41b3533cc69
	  Boot ID:                    bca10429-bddd-4547-8fb0-c50d93740969
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jxx6x    0 (0%)       0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kindnet-mspbr              100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      72s
	  kube-system                 kube-proxy-jcph5           0 (0%)       0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  NodeHasSufficientMemory  72s (x2 over 72s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x2 over 72s)  kubelet          Node ha-267500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x2 over 72s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                node-controller  Node ha-267500-m03 event: Registered Node ha-267500-m03 in Controller
	  Normal  NodeReady                55s                kubelet          Node ha-267500-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"warn","ts":"2024-04-29T00:27:56.3113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:27:55.726454Z","time spent":"584.410622ms","remote":"127.0.0.1:52796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2570 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-04-29T00:27:56.685502Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2039}
	{"level":"info","ts":"2024-04-29T00:27:56.697671Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2039,"took":"11.701828ms","hash":3710382387,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T00:27:56.697806Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710382387,"revision":2039,"compact-revision":1501}
	{"level":"warn","ts":"2024-04-29T00:28:01.015001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.747946ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321686993 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:28:01.015171Z","caller":"traceutil/trace.go:171","msg":"trace[1066403778] linearizableReadLoop","detail":"{readStateIndex:2839; appliedIndex:2838; }","duration":"166.002504ms","start":"2024-04-29T00:28:00.849156Z","end":"2024-04-29T00:28:01.015158Z","steps":["trace[1066403778] 'read index received'  (duration: 64.927058ms)","trace[1066403778] 'applied index is now lower than readState.Index'  (duration: 101.074346ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:01.015567Z","caller":"traceutil/trace.go:171","msg":"trace[1444860087] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"309.347954ms","start":"2024-04-29T00:28:00.706202Z","end":"2024-04-29T00:28:01.01555Z","steps":["trace[1444860087] 'process raft request'  (duration: 207.946307ms)","trace[1444860087] 'compare'  (duration: 100.659345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:01.015577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-46antb\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:01.015843Z","caller":"traceutil/trace.go:171","msg":"trace[1574012410] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-46antb; range_end:; response_count:0; response_revision:2582; }","duration":"166.706906ms","start":"2024-04-29T00:28:00.849128Z","end":"2024-04-29T00:28:01.015834Z","steps":["trace[1574012410] 'agreement among raft nodes before linearized reading'  (duration: 166.065204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:01.015715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:00.706185Z","time spent":"309.436654ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"info","ts":"2024-04-29T00:28:01.13236Z","caller":"traceutil/trace.go:171","msg":"trace[848518735] transaction","detail":"{read_only:false; response_revision:2583; number_of_response:1; }","duration":"106.51056ms","start":"2024-04-29T00:28:01.02575Z","end":"2024-04-29T00:28:01.132261Z","steps":["trace[848518735] 'process raft request'  (duration: 100.002844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:10.719709Z","caller":"traceutil/trace.go:171","msg":"trace[688876790] transaction","detail":"{read_only:false; response_revision:2633; number_of_response:1; }","duration":"131.602022ms","start":"2024-04-29T00:28:10.588085Z","end":"2024-04-29T00:28:10.719687Z","steps":["trace[688876790] 'process raft request'  (duration: 131.335422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321687140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:253b8f272df6da63>","response":"size:41"}
	{"level":"info","ts":"2024-04-29T00:28:11.057309Z","caller":"traceutil/trace.go:171","msg":"trace[730869850] linearizableReadLoop","detail":"{readStateIndex:2894; appliedIndex:2893; }","duration":"310.80146ms","start":"2024-04-29T00:28:10.746493Z","end":"2024-04-29T00:28:11.057294Z","steps":["trace[730869850] 'read index received'  (duration: 118.63939ms)","trace[730869850] 'applied index is now lower than readState.Index'  (duration: 192.16047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.057392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.91436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:11.057434Z","caller":"traceutil/trace.go:171","msg":"trace[965932074] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2633; }","duration":"310.98056ms","start":"2024-04-29T00:28:10.746443Z","end":"2024-04-29T00:28:11.057424Z","steps":["trace[965932074] 'agreement among raft nodes before linearized reading'  (duration: 310.91126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.746429Z","time spent":"311.02236ms","remote":"127.0.0.1:52498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T00:28:11.057874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.721431Z","time spent":"336.441923ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T00:28:11.410781Z","caller":"traceutil/trace.go:171","msg":"trace[921900368] linearizableReadLoop","detail":"{readStateIndex:2895; appliedIndex:2894; }","duration":"284.369895ms","start":"2024-04-29T00:28:11.126395Z","end":"2024-04-29T00:28:11.410765Z","steps":["trace[921900368] 'read index received'  (duration: 193.861274ms)","trace[921900368] 'applied index is now lower than readState.Index'  (duration: 90.507421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.411124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.711696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-267500-m03\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-04-29T00:28:11.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1500780481] range","detail":"{range_begin:/registry/minions/ha-267500-m03; range_end:; response_count:1; response_revision:2634; }","duration":"284.831096ms","start":"2024-04-29T00:28:11.126391Z","end":"2024-04-29T00:28:11.411222Z","steps":["trace[1500780481] 'agreement among raft nodes before linearized reading'  (duration: 284.474795ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:11.411809Z","caller":"traceutil/trace.go:171","msg":"trace[1062724437] transaction","detail":"{read_only:false; response_revision:2634; number_of_response:1; }","duration":"351.77576ms","start":"2024-04-29T00:28:11.059046Z","end":"2024-04-29T00:28:11.410821Z","steps":["trace[1062724437] 'process raft request'  (duration: 261.137839ms)","trace[1062724437] 'compare'  (duration: 90.397121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.412239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:11.059032Z","time spent":"352.927263ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2582 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911331 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"warn","ts":"2024-04-29T00:28:16.429655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.224744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:16.429988Z","caller":"traceutil/trace.go:171","msg":"trace[1266991256] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2651; }","duration":"181.540745ms","start":"2024-04-29T00:28:16.248407Z","end":"2024-04-29T00:28:16.429948Z","steps":["trace[1266991256] 'range keys from in-memory index tree'  (duration: 181.210444ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:29:14 up 23 min,  0 users,  load average: 0.28, 0.39, 0.35
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:28:06.329175       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.27.233.131 Flags: [] Table: 0} 
	I0429 00:28:16.433586       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:28:16.433849       1 main.go:227] handling current node
	I0429 00:28:16.434062       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:28:16.434277       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:28:26.441340       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:28:26.441447       1 main.go:227] handling current node
	I0429 00:28:26.441471       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:28:26.441479       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:28:36.449536       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:28:36.449670       1 main.go:227] handling current node
	I0429 00:28:36.449684       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:28:36.449692       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:28:46.456374       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:28:46.456747       1 main.go:227] handling current node
	I0429 00:28:46.456845       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:28:46.456857       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:28:56.471761       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:28:56.471864       1 main.go:227] handling current node
	I0429 00:28:56.472011       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:28:56.472045       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:06.487195       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:06.487288       1 main.go:227] handling current node
	I0429 00:29:06.487301       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:06.487309       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
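kindnet's work here is exactly the route it logged at 00:28:06: pod CIDR 10.244.1.0/24 for m03, routed via that node's InternalIP 172.27.233.131. That can be verified from the primary node (sketch):

    minikube -p ha-267500 ssh "ip route show | grep 10.244"
    # expected (per the kindnet log above): 10.244.1.0/24 via 172.27.233.131
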
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	I0429 00:27:56.312329       1 trace.go:236] Trace[1132022318]: "Update" accept:application/json, */*,audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:27:55.724) (total time: 587ms):
	Trace[1132022318]: ["GuaranteedUpdate etcd3" audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 587ms (00:27:55.725)
	Trace[1132022318]:  ---"Txn call completed" 586ms (00:27:56.312)]
	Trace[1132022318]: [587.55203ms] [587.55203ms] END
	I0429 00:28:11.413223       1 trace.go:236] Trace[768089845]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.226.61,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 00:28:10.678) (total time: 734ms):
	Trace[768089845]: ---"Transaction prepared" 338ms (00:28:11.058)
	Trace[768089845]: ---"Txn call completed" 354ms (00:28:11.413)
	Trace[768089845]: [734.530496ms] [734.530496ms] END
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	I0429 00:28:02.170352       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-267500-m03\" does not exist"
	I0429 00:28:02.230498       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500-m03" podCIDRs=["10.244.1.0/24"]
	I0429 00:28:05.393266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-267500-m03"
	I0429 00:28:19.456843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-267500-m03"
	I0429 00:28:19.485470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0429 00:28:19.487549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.6µs"
	I0429 00:28:19.505362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.4µs"
	I0429 00:28:22.722440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.931424ms"
	I0429 00:28:22.722950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.301µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 00:25:02 ha-267500 kubelet[2223]: E0429 00:25:02.768338    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:25:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:25:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:25:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:25:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:26:02 ha-267500 kubelet[2223]: E0429 00:26:02.769707    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:26:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:26:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:26:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:26:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:27:02 ha-267500 kubelet[2223]: E0429 00:27:02.769330    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:27:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:28:02 ha-267500 kubelet[2223]: E0429 00:28:02.772180    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:29:02 ha-267500 kubelet[2223]: E0429 00:29:02.767197    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0428 17:29:07.304718   11012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
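The kubelet section in the logs above repeats the same failure once a minute: the iptables canary cannot create its chain in the IPv6 "nat" table, which indicates the guest kernel has no ip6table_nat support. As an illustration only (not part of the test harness), a minimal Go sketch of the same probe, assuming it runs inside the minikube guest rather than on the Windows host:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The kubelet canary periodically creates a throwaway chain in the
	// ip6tables "nat" table; "Table does not exist" in the log above means
	// the ip6table_nat kernel module is missing from the guest kernel.
	out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		fmt.Printf("canary probe failed (likely missing ip6table_nat): %v\n%s", err, out)
		return
	}
	// Clean up the chain if the probe succeeded.
	exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
	fmt.Println("ip6tables nat table is available")
}

In this run the error is cosmetic noise rather than the cause of the failure: IPv4 proxying works, as the kube-proxy section ("No iptables support for family" ipFamily="IPv6", single-stack IPv4 mode) confirms.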
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.8387841s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s:

-- stdout --
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  116s (x5 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  59s (x2 over 69s)   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (258.72s)
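The FailedScheduling events in the describe output above ("didn't match pod anti-affinity rules", first 0/1 then 0/2 nodes available) are what keep busybox-fc5497c4f-wg44s Pending: each busybox replica must land on a distinct node, so the third replica cannot schedule until the added worker is Ready. The deployment manifest is not shown in the log, but events of this shape come from a required anti-affinity rule along the lines of the following sketch; the app=busybox selector is taken from the pod's labels above, the rest is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical sketch of the anti-affinity shape that produces the
	// events above; not the actual test manifest.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			// "Required" rules are hard constraints: the scheduler leaves a
			// pod Pending rather than co-locate two matching pods.
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// One matching pod per hostname, i.e. per node.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}

With three replicas and only two schedulable nodes at the time of the check, one replica necessarily stays Pending, which is consistent with the 258s timeout of AddWorkerNode.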

TestMultiControlPlane/serial/HAppyAfterClusterStart (50.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (17.9610687s)
ha_test.go:304: expected profile "ha-267500" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-267500\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-267500\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-267500\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.239.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.226.61\",\"Port\":8443,\"KubernetesVersion
\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.238.86\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.233.131\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\
":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"Di
sableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
ha_test.go:307: expected profile "ha-267500" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-267500\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-267500\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-267500\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.239.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.226.61\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.238.86\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.233.131\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\"
:false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations
\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.5159842s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (7.9935741s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-267500 -v=7                | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:25 PDT | 28 Apr 24 17:28 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
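The repetition above is a poll: the driver queries the VM state and the first NIC's first IP address once a second until DHCP hands the guest a lease (about 26 seconds in this run). A minimal Go sketch of the loop, assuming the same VM name and a five-minute budget:

// waitip.go - a sketch of the poll loop above: query VM state and the
// first IP address until the guest gets a DHCP lease.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func ps(script string) (string, error) {
	out, err := exec.Command("powershell", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-267500"
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil || state != "Running" {
			log.Printf("state=%q err=%v; retrying", state, err)
			time.Sleep(time.Second)
			continue
		}
		ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for an IP address")
}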
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
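"SSH client type: native" means the in-process Go client (golang.org/x/crypto/ssh, an external module) rather than an ssh.exe subprocess. A minimal sketch that authenticates with the machine key generated in this run and executes the same `hostname` command:

// sshhostname.go - a minimal sketch of the "native" SSH client path.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.27.226.61:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}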
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
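configureAuth issues a server certificate signed by the shared minikube CA with exactly the SANs listed above. A self-contained sketch with crypto/x509, assuming ca.pem and a PKCS#1 RSA ca-key.pem sit in the working directory (file names and validity period are illustrative):

// servercert.go - a sketch of issuing a TLS server certificate signed
// by a local CA, with the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	blk, _ := pem.Decode(b)
	if blk == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	return blk
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes RSA PKCS#1
	if err != nil {
		log.Fatal(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above:
		DNSNames:    []string{"ha-267500", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.226.61")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}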
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
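Two details of this exchange are worth noting. The empty `ExecStart=` clears the value inherited from the base unit (systemd rejects two ExecStart lines outside Type=oneshot, as the comment in the unit says), and the `diff -u ... || { mv ...; systemctl ...; }` one-liner makes the install idempotent: docker is only restarted when the rendered unit actually changed. The unit text itself is rendered host-side from a Go text/template before being piped through `sudo tee`; a simplified sketch with illustrative field names (not the provisioner's real ones):

// dockerunit.go - a simplified sketch of rendering the docker.service
// drop-in above from a Go text/template.
package main

import (
	"log"
	"os"
	"text/template"
)

const unitTmpl = `[Service]
Type=notify
# The empty ExecStart= clears the command inherited from the base unit.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}}{{range .InsecureRegistries}} --insecure-registry {{.}}{{end}}
`

type unitData struct {
	Port               int
	CACert, ServerCert string
	ServerKey          string
	Provider           string
	InsecureRegistries []string
}

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	err := t.Execute(os.Stdout, unitData{
		Port:               2376,
		CACert:             "/etc/docker/ca.pem",
		ServerCert:         "/etc/docker/server.pem",
		ServerKey:          "/etc/docker/server-key.pem",
		Provider:           "hyperv",
		InsecureRegistries: []string{"10.96.0.0/12"},
	})
	if err != nil {
		log.Fatal(err)
	}
}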
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
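fix.go reads the guest clock with `date +%s.%N`, compares it with the host, and resets it when the drift is large enough, which the `sudo date -s @1714349229` below does for this 4.6s skew. A sketch of the comparison step, parsing the sample value from the log (the 2s threshold is illustrative):

// clockdelta.go - a sketch of the guest-clock check: parse the guest's
// `date +%s.%N` output, compare with local time, decide whether to reset.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1714349229.688972111" // from the SSH command above
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := guest.Sub(time.Now())
	fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", sec)
	}
}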
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
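The find/mv above side-lines any bridge or podman CNI configs with a .mk_disabled suffix so that kindnet, recommended later for this multinode profile, owns pod networking. The same rename expressed as local Go (the real version runs over SSH inside the guest):

// cnidisable.go - a local-filesystem sketch of the rename done over SSH
// above: move bridge/podman CNI configs aside with a .mk_disabled suffix.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pat)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already side-lined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", m)
		}
	}
}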
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
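The preload decision is a set diff: list local tags via `docker images --format {{.Repository}}:{{.Tag}}` and verify the expected control-plane images are all present. A sketch of that check with a trimmed expected list:

// preloadcheck.go - a sketch of the "images are preloaded" check: list
// local image tags and verify the expected set is present.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would trigger a load:", img)
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}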
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
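	kube-vip runs as a static pod with control-plane load-balancing enabled (cp_enable/lb_enable above), so the HA VIP 172.27.239.254 should appear on eth0 of whichever node currently holds the plndr-cp-lock lease. A quick in-guest check, hypothetical and not part of the test run:
	
	    ip addr show eth0 | grep 172.27.239.254
	    curl -k https://172.27.239.254:8443/healthz   # apiserver answers "ok" once up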
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
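	The one-liner above replaces any stale control-plane.minikube.internal entry safely: strip the old line, append the fresh VIP mapping to a temp file, then copy it back (cp rather than mv, so /etc/hosts keeps its inode and ownership). Spelled out, the same pattern reads:
	
	    # drop any existing entry for the name, then append the fresh VIP mapping
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      printf '172.27.239.254\tcontrol-plane.minikube.internal\n'
	    } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts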
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
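	The "minikube" profile cert generated above is the apiserver serving certificate, minted for the service IP (10.96.0.1), localhost, the node IP and the HA VIP (see the IP list). Once the files are scp'd to the node, the SAN list can be read back by hand (a manual check, not something this run performs):
	
	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'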
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
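	This sequence reproduces OpenSSL's hashed CA directory layout: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, which is how tools using the default verify path locate it. The link names above (3ec20f2e, b5213941, 51391683) come straight from openssl x509 -hash; by hand:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"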
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
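	The Service-Kubelet preflight warning is benign here: minikube already ran systemctl start kubelet a few lines earlier and manages the unit itself. On a hand-built node the same warning would be cleared by enabling the unit:
	
	    sudo systemctl enable --now kubelet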
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
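	The bootstrap token printed above (o2t0fz.gqoxv8rhmbtgnafl) expires after the 24h ttl set in the InitConfiguration, so it is only good for joins made during this run. A fresh token plus the full join command can be regenerated at any time with stock kubeadm (same staged binary path assumed):
	
	    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm token create --print-join-command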
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
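	The CNI manifest is applied with the cluster's own kubectl against the in-VM kubeconfig. Whether kindnet actually rolled out can be checked the same way (the daemonset name is an assumption about minikube's kindnet manifest):
	
	    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset/kindnet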
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
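	The burst of identical "get sa default" calls above is a poll, retried roughly twice a second: the default ServiceAccount appearing is the signal that kube-controller-manager is serving, at which point the minikube-rbac cluster-admin binding created earlier takes effect. The same wait, as a shell loop:
	
	    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get sa default >/dev/null 2>&1; do sleep 0.5; done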
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
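	The sed pipeline above splices a hosts{} stanza ahead of the forward directive in the CoreDNS Corefile, so pods resolve host.minikube.internal to the host gateway (172.27.224.1). The injected stanza can be read back with (a sketch, using the same in-VM kubeconfig):
	
	    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'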
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
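	The GET/PUT pair against storageclasses/standard above is minikube (re)asserting the default StorageClass annotation. From the host, with kubectl pointed at the profile's kubeconfig, the result can be verified as:
	
	    kubectl get storageclass standard \
	      -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'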
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
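The "Writing magic tar header" / "Writing SSH key tar header" lines reflect the long-standing boot2docker/docker-machine convention: a fixed VHD is raw data plus a footer, so a tar stream containing the SSH key can be written at offset 0 of the small fixed disk, survive the Convert-VHD and Resize-VHD steps, and be detected and extracted by the guest on first boot. A sketch of that write, with illustrative paths and entry names rather than the driver's exact on-disk layout:

// Sketch: write an SSH public key as a tar stream at the start of a
// raw (fixed) disk image, per the boot2docker-style convention the
// log lines above suggest. Paths and the entry name are assumptions.
package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	// Open the raw fixed image created earlier by New-VHD -Fixed.
	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	key, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		log.Fatal(err)
	}

	// The tar header written at offset 0 doubles as the "magic"
	// marker: the guest recognizes it and untars the key from it.
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0644,
		Size: int64(len(key)),
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}
}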
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
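The stretch above is the driver's wait-for-IP phase: it alternates between querying the VM state and the first adapter's first address until DHCP supplies one (here, 172.27.238.86 after roughly 25 seconds). A minimal Go sketch of the same loop shape, assuming a hypothetical runPS helper that wraps powershell.exe as in the earlier commands:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runPS executes one PowerShell expression and returns its stdout.
// Hypothetical helper matching the [executing ==>] lines above.
func runPS(expr string) (string, error) {
	out, err := exec.Command("powershell.exe",
		"-NoProfile", "-NonInteractive", expr).Output()
	return string(out), err
}

// waitForIP re-queries the VM state and the first adapter's first
// address until DHCP supplies one, mirroring the loop in the log.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := runPS(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName))
		if err != nil {
			return "", err
		}
		if strings.TrimSpace(state) == "Running" {
			ip, err := runPS(fmt.Sprintf(
				`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName))
			if err != nil {
				return "", err
			}
			if ip = strings.TrimSpace(ip); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
}

func main() {
	ip, err := waitForIP("ha-267500-m02", 5*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM reachable at", ip)
}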
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
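The configureAuth step that just completed issues a TLS server certificate for dockerd's tcp://0.0.0.0:2376 endpoint, signed by the local minikube CA and carrying the SAN list logged above (127.0.0.1, 172.27.238.86, ha-267500-m02, localhost, minikube). A self-contained sketch of that issuance with Go's crypto/x509; the stand-in CA generated in main replaces loading ca.pem/ca-key.pem from disk:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a Docker server certificate from the given CA,
// with the SANs seen in the log (loopback, VM address, hostnames).
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) {
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.238.86")},
		DNSNames:     []string{"ha-267500-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func main() {
	// Stand-in CA; the real flow loads ca.pem / ca-key.pem from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	issueServerCert(caCert, caKey)
}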
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
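The unit file above is rendered host-side, piped to docker.service.new over SSH, and installed with the diff-or-replace one-liner so the daemon is only restarted when the unit actually changed; here the diff fails because no unit existed yet, so the file is moved into place and the service enabled. A sketch of the rendering half, using a hypothetical template and field names rather than minikube's actual ones:

package main

import (
	"log"
	"os"
	"text/template"
)

// dockerUnit carries the values substituted into the unit file; the
// field names here are assumptions for illustration.
type dockerUnit struct {
	NoProxy          string
	InsecureRegistry string
	Provider         string
}

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// In the real flow this output is piped through
	// `sudo tee /lib/systemd/system/docker.service.new` over SSH.
	if err := t.Execute(os.Stdout, dockerUnit{
		NoProxy:          "172.27.226.61",
		InsecureRegistry: "10.96.0.0/12",
		Provider:         "hyperv",
	}); err != nil {
		log.Fatal(err)
	}
}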
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
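The clock check just above parses the guest's `date +%s.%N` output, compares it with the host clock (a 4.62s delta here), and resets the guest with `date -s` when they drift. A sketch of that comparison; the 2s threshold is an assumed illustration value, not minikube's configured one:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// Parse `date +%s.%N` output, compare against the local clock, and
// decide whether the guest needs a resync, as the fix.go lines show.
func main() {
	guestRaw := "1714349431.710726684" // SSH output from the log
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		panic(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock: %s (delta %s)\n", guest, delta)
	if delta > 2*time.Second {
		// The log shows the equivalent of: sudo date -s @1714349431
		fmt.Printf("would run: sudo date -s @%d\n", sec)
	}
}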
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
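
	Note: the failure above reduces to dockerd on m02 timing out while dialing /run/containerd/containerd.sock, which suggests containerd itself never came back up after the restart at 00:10:48. A plausible first check on the affected node, assuming SSH access still works (e.g. via `minikube ssh -n ha-267500-m02`); this is a sketch, not output captured from this run:

	    systemctl status containerd
	    journalctl -u containerd --no-pager | tail -n 50
	    ls -l /run/containerd/containerd.sock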
	
	
	==> Docker <==
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:11 ha-267500 dockerd[1316]: 2024/04/29 00:24:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:12 ha-267500 dockerd[1316]: 2024/04/29 00:24:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:55 ha-267500 dockerd[1316]: 2024/04/29 00:24:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:15 ha-267500 dockerd[1316]: 2024/04/29 00:29:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:15 ha-267500 dockerd[1316]: 2024/04/29 00:29:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
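
	Note: "http: superfluous response.WriteHeader call" is Go's net/http warning that a handler wrote response headers twice; here it surfaces through dockerd's otelhttp instrumentation wrapper and is noisy rather than fatal. An illustrative way to gauge how often it fires (not part of the recorded run):

	    minikube ssh -- sudo journalctl -u docker | grep -c 'superfluous response.WriteHeader'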
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         21 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     22 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         22 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         22 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         22 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
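
	Note: the listing above has the shape of `crictl ps -a` output. To reproduce it on the control-plane node (illustrative; assumes crictl is pointed at the default cri-dockerd socket):

	    minikube ssh -- sudo crictl ps -a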
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
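
	Note: both coredns replicas answer cluster-internal names such as kubernetes.default.svc.cluster.local with NOERROR, so in-cluster DNS looks healthy. One of the lookups above can be reproduced from inside the cluster with the busybox pod already present in this report (illustrative):

	    kubectl exec busybox-fc5497c4f-5xln2 -- nslookup kubernetes.default.svc.cluster.local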
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:30:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                21m   kubelet          Node ha-267500 status is now: NodeReady
	
	
	Name:               ha-267500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T17_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.233.131
	  Hostname:    ha-267500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0562d38ef374b73969ab15fed947e11
	  System UUID:                c94a104a-b670-854e-ac89-f41b3533cc69
	  Boot ID:                    bca10429-bddd-4547-8fb0-c50d93740969
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jxx6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-mspbr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m3s
	  kube-system                 kube-proxy-jcph5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x2 over 2m3s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x2 over 2m3s)  kubelet          Node ha-267500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x2 over 2m3s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                   node-controller  Node ha-267500-m03 event: Registered Node ha-267500-m03 in Controller
	  Normal  NodeReady                106s                 kubelet          Node ha-267500-m03 status is now: NodeReady
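
	Note: this section is effectively `kubectl describe nodes`, and only ha-267500 and ha-267500-m03 appear, which is consistent with the m02 docker failure logged earlier: m02 never registered with the cluster. To re-check from the host (illustrative):

	    kubectl get nodes -o wide
	    kubectl describe node ha-267500-m03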
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"warn","ts":"2024-04-29T00:27:56.3113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:27:55.726454Z","time spent":"584.410622ms","remote":"127.0.0.1:52796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2570 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-04-29T00:27:56.685502Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2039}
	{"level":"info","ts":"2024-04-29T00:27:56.697671Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2039,"took":"11.701828ms","hash":3710382387,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T00:27:56.697806Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710382387,"revision":2039,"compact-revision":1501}
	{"level":"warn","ts":"2024-04-29T00:28:01.015001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.747946ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321686993 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:28:01.015171Z","caller":"traceutil/trace.go:171","msg":"trace[1066403778] linearizableReadLoop","detail":"{readStateIndex:2839; appliedIndex:2838; }","duration":"166.002504ms","start":"2024-04-29T00:28:00.849156Z","end":"2024-04-29T00:28:01.015158Z","steps":["trace[1066403778] 'read index received'  (duration: 64.927058ms)","trace[1066403778] 'applied index is now lower than readState.Index'  (duration: 101.074346ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:01.015567Z","caller":"traceutil/trace.go:171","msg":"trace[1444860087] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"309.347954ms","start":"2024-04-29T00:28:00.706202Z","end":"2024-04-29T00:28:01.01555Z","steps":["trace[1444860087] 'process raft request'  (duration: 207.946307ms)","trace[1444860087] 'compare'  (duration: 100.659345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:01.015577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-46antb\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:01.015843Z","caller":"traceutil/trace.go:171","msg":"trace[1574012410] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-46antb; range_end:; response_count:0; response_revision:2582; }","duration":"166.706906ms","start":"2024-04-29T00:28:00.849128Z","end":"2024-04-29T00:28:01.015834Z","steps":["trace[1574012410] 'agreement among raft nodes before linearized reading'  (duration: 166.065204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:01.015715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:00.706185Z","time spent":"309.436654ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"info","ts":"2024-04-29T00:28:01.13236Z","caller":"traceutil/trace.go:171","msg":"trace[848518735] transaction","detail":"{read_only:false; response_revision:2583; number_of_response:1; }","duration":"106.51056ms","start":"2024-04-29T00:28:01.02575Z","end":"2024-04-29T00:28:01.132261Z","steps":["trace[848518735] 'process raft request'  (duration: 100.002844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:10.719709Z","caller":"traceutil/trace.go:171","msg":"trace[688876790] transaction","detail":"{read_only:false; response_revision:2633; number_of_response:1; }","duration":"131.602022ms","start":"2024-04-29T00:28:10.588085Z","end":"2024-04-29T00:28:10.719687Z","steps":["trace[688876790] 'process raft request'  (duration: 131.335422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321687140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:253b8f272df6da63>","response":"size:41"}
	{"level":"info","ts":"2024-04-29T00:28:11.057309Z","caller":"traceutil/trace.go:171","msg":"trace[730869850] linearizableReadLoop","detail":"{readStateIndex:2894; appliedIndex:2893; }","duration":"310.80146ms","start":"2024-04-29T00:28:10.746493Z","end":"2024-04-29T00:28:11.057294Z","steps":["trace[730869850] 'read index received'  (duration: 118.63939ms)","trace[730869850] 'applied index is now lower than readState.Index'  (duration: 192.16047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.057392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.91436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:11.057434Z","caller":"traceutil/trace.go:171","msg":"trace[965932074] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2633; }","duration":"310.98056ms","start":"2024-04-29T00:28:10.746443Z","end":"2024-04-29T00:28:11.057424Z","steps":["trace[965932074] 'agreement among raft nodes before linearized reading'  (duration: 310.91126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.746429Z","time spent":"311.02236ms","remote":"127.0.0.1:52498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T00:28:11.057874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.721431Z","time spent":"336.441923ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T00:28:11.410781Z","caller":"traceutil/trace.go:171","msg":"trace[921900368] linearizableReadLoop","detail":"{readStateIndex:2895; appliedIndex:2894; }","duration":"284.369895ms","start":"2024-04-29T00:28:11.126395Z","end":"2024-04-29T00:28:11.410765Z","steps":["trace[921900368] 'read index received'  (duration: 193.861274ms)","trace[921900368] 'applied index is now lower than readState.Index'  (duration: 90.507421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.411124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.711696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-267500-m03\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-04-29T00:28:11.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1500780481] range","detail":"{range_begin:/registry/minions/ha-267500-m03; range_end:; response_count:1; response_revision:2634; }","duration":"284.831096ms","start":"2024-04-29T00:28:11.126391Z","end":"2024-04-29T00:28:11.411222Z","steps":["trace[1500780481] 'agreement among raft nodes before linearized reading'  (duration: 284.474795ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:11.411809Z","caller":"traceutil/trace.go:171","msg":"trace[1062724437] transaction","detail":"{read_only:false; response_revision:2634; number_of_response:1; }","duration":"351.77576ms","start":"2024-04-29T00:28:11.059046Z","end":"2024-04-29T00:28:11.410821Z","steps":["trace[1062724437] 'process raft request'  (duration: 261.137839ms)","trace[1062724437] 'compare'  (duration: 90.397121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.412239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:11.059032Z","time spent":"352.927263ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2582 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911331 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"warn","ts":"2024-04-29T00:28:16.429655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.224744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:16.429988Z","caller":"traceutil/trace.go:171","msg":"trace[1266991256] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2651; }","duration":"181.540745ms","start":"2024-04-29T00:28:16.248407Z","end":"2024-04-29T00:28:16.429948Z","steps":["trace[1266991256] 'range keys from in-memory index tree'  (duration: 181.210444ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:30:05 up 24 min,  0 users,  load average: 0.19, 0.35, 0.34
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:28:56.472045       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:06.487195       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:06.487288       1 main.go:227] handling current node
	I0429 00:29:06.487301       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:06.487309       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:16.494505       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:16.494604       1 main.go:227] handling current node
	I0429 00:29:16.494618       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:16.494626       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:26.505397       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:26.505536       1 main.go:227] handling current node
	I0429 00:29:26.505614       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:26.505696       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:36.517155       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:36.517234       1 main.go:227] handling current node
	I0429 00:29:36.517247       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:36.517255       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:46.529718       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:46.529830       1 main.go:227] handling current node
	I0429 00:29:46.529844       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:46.529864       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:29:56.535112       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:29:56.535203       1 main.go:227] handling current node
	I0429 00:29:56.535217       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:29:56.535225       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
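
	Note: kindnet cycles through only two nodes (172.27.226.61 and 172.27.233.131), again consistent with m02 being absent. Cross-checking which kindnet pods exist and where they landed (illustrative):

	    kubectl -n kube-system get pods -o wide | grep kindnet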
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	I0429 00:27:56.312329       1 trace.go:236] Trace[1132022318]: "Update" accept:application/json, */*,audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:27:55.724) (total time: 587ms):
	Trace[1132022318]: ["GuaranteedUpdate etcd3" audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 587ms (00:27:55.725)
	Trace[1132022318]:  ---"Txn call completed" 586ms (00:27:56.312)]
	Trace[1132022318]: [587.55203ms] [587.55203ms] END
	I0429 00:28:11.413223       1 trace.go:236] Trace[768089845]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.226.61,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 00:28:10.678) (total time: 734ms):
	Trace[768089845]: ---"Transaction prepared" 338ms (00:28:11.058)
	Trace[768089845]: ---"Txn call completed" 354ms (00:28:11.413)
	Trace[768089845]: [734.530496ms] [734.530496ms] END
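
	Note: the apiserver traces above (Txn calls of roughly 350-590ms) line up with the slow etcd applies in the etcd section, so the latency originates below the apiserver. Apiserver-side etcd latency can be sampled from its metrics endpoint (illustrative):

	    kubectl get --raw /metrics | grep etcd_request_duration_seconds | head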
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	I0429 00:28:02.170352       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-267500-m03\" does not exist"
	I0429 00:28:02.230498       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500-m03" podCIDRs=["10.244.1.0/24"]
	I0429 00:28:05.393266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-267500-m03"
	I0429 00:28:19.456843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-267500-m03"
	I0429 00:28:19.485470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0429 00:28:19.487549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.6µs"
	I0429 00:28:19.505362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.4µs"
	I0429 00:28:22.722440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.931424ms"
	I0429 00:28:22.722950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.301µs"
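
	Note: the attach-detach controller error at 00:28:02 fired in the moment between m03's kubelet registering and the node object becoming visible; the very next line shows the PodCIDR being assigned, so it self-resolved and is not the test failure. Events for the node can be pulled for confirmation (illustrative):

	    kubectl get events --field-selector involvedObject.name=ha-267500-m03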
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
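
	Note: kube-proxy came up cleanly in iptables mode (no IPv6 iptables support, hence single-stack IPv4). The generated service chains can be inspected on the node (illustrative):

	    minikube ssh -- sudo iptables -t nat -L KUBE-SERVICES | head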
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 00:26:02 ha-267500 kubelet[2223]: E0429 00:26:02.769707    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:26:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:26:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:26:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:26:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:27:02 ha-267500 kubelet[2223]: E0429 00:27:02.769330    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:27:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:28:02 ha-267500 kubelet[2223]: E0429 00:28:02.772180    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:29:02 ha-267500 kubelet[2223]: E0429 00:29:02.767197    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:30:02 ha-267500 kubelet[2223]: E0429 00:30:02.771457    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:29:57.956215   11736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
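Two recurring patterns in the log above are routine noise rather than the failure itself: the kube-scheduler list/watch "forbidden" errors are the usual startup race, which stops once RBAC propagates and the informer caches sync (the "Caches are synced" line two seconds later), and the kubelet iptables-canary errors mean the guest kernel simply has no ip6tables nat table. The latter is reproducible from inside the node; a minimal sketch, assuming the ha-267500 profile is still running:

	PS> out/minikube-windows-amd64.exe -p ha-267500 ssh -- "sudo ip6tables -t nat -L"
	# expected to fail with the same message as the canary:
	# ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)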
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.6753589s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m46s (x5 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  109s (x2 over 119s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (50.32s)
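The FailedScheduling events above are pod anti-affinity at work: each busybox replica must land on a distinct node, so while only one or two nodes were schedulable the third replica (busybox-fc5497c4f-wg44s) stayed Pending. A sketch for dumping the blocking affinity term, assuming the owning Deployment is named busybox as the ReplicaSet name busybox-fc5497c4f suggests:

	PS> kubectl --context ha-267500 get deployment busybox -o jsonpath="{.spec.template.spec.affinity.podAntiAffinity}"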

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (66.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status --output json -v=7 --alsologtostderr
E0428 17:30:36.422948    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status --output json -v=7 --alsologtostderr: exit status 2 (34.0030491s)

                                                
                                                
-- stdout --
	[{"Name":"ha-267500","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-267500-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-267500-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
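The summary above is why the command exits with status 2: minikube status reports a non-zero code when any node has a stopped component, and here ha-267500-m02 shows both Kubelet and APIServer as Stopped. A sketch for filtering the unhealthy nodes out of that JSON (PowerShell, same profile assumed):

	PS> out/minikube-windows-amd64.exe -p ha-267500 status --output json |
	      ConvertFrom-Json |
	      Where-Object { $_.Kubelet -eq 'Stopped' -or $_.APIServer -eq 'Stopped' } |
	      Select-Object Name, Kubelet, APIServer

The stderr trace below shows the per-node Hyper-V and SSH probes behind that summary.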
** stderr ** 
	W0428 17:30:18.813800    9512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 17:30:18.822127    9512 out.go:291] Setting OutFile to fd 1020 ...
	I0428 17:30:18.822487    9512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:30:18.822487    9512 out.go:304] Setting ErrFile to fd 1612...
	I0428 17:30:18.822487    9512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:30:18.845512    9512 out.go:298] Setting JSON to true
	I0428 17:30:18.845726    9512 mustload.go:65] Loading cluster: ha-267500
	I0428 17:30:18.845726    9512 notify.go:220] Checking for updates...
	I0428 17:30:18.846385    9512 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:30:18.846385    9512 status.go:255] checking status of ha-267500 ...
	I0428 17:30:18.847921    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:30:20.892646    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:20.892839    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:20.892839    9512 status.go:330] ha-267500 host status = "Running" (err=<nil>)
	I0428 17:30:20.892839    9512 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:30:20.893710    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:30:23.025443    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:23.025443    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:23.025443    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:30:25.496801    9512 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:30:25.497642    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:25.497886    9512 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:30:25.510940    9512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:30:25.510940    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:30:27.541583    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:27.542266    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:27.542369    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:30:30.033929    9512 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:30:30.033929    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:30.034746    9512 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:30:30.137742    9512 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6267937s)
	I0428 17:30:30.152164    9512 ssh_runner.go:195] Run: systemctl --version
	I0428 17:30:30.176057    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:30:30.206128    9512 kubeconfig.go:125] found "ha-267500" server: "https://172.27.239.254:8443"
	I0428 17:30:30.206339    9512 api_server.go:166] Checking apiserver status ...
	I0428 17:30:30.220490    9512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 17:30:30.264097    9512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup
	W0428 17:30:30.285386    9512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0428 17:30:30.298079    9512 ssh_runner.go:195] Run: ls
	I0428 17:30:30.306636    9512 api_server.go:253] Checking apiserver healthz at https://172.27.239.254:8443/healthz ...
	I0428 17:30:30.313679    9512 api_server.go:279] https://172.27.239.254:8443/healthz returned 200:
	ok
	I0428 17:30:30.313679    9512 status.go:422] ha-267500 apiserver status = Running (err=<nil>)
	I0428 17:30:30.313679    9512 status.go:257] ha-267500 status: &{Name:ha-267500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 17:30:30.314210    9512 status.go:255] checking status of ha-267500-m02 ...
	I0428 17:30:30.315095    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:30:32.375355    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:32.375649    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:32.375649    9512 status.go:330] ha-267500-m02 host status = "Running" (err=<nil>)
	I0428 17:30:32.375649    9512 host.go:66] Checking if "ha-267500-m02" exists ...
	I0428 17:30:32.376509    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:30:34.474686    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:34.474686    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:34.475299    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:30:36.908470    9512 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:30:36.908651    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:36.908651    9512 host.go:66] Checking if "ha-267500-m02" exists ...
	I0428 17:30:36.922856    9512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:30:36.922856    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:30:38.958970    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:38.958970    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:38.958970    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:30:41.412901    9512 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:30:41.412960    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:41.412960    9512 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:30:41.511794    9512 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5889298s)
	I0428 17:30:41.525385    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:30:41.555006    9512 kubeconfig.go:125] found "ha-267500" server: "https://172.27.239.254:8443"
	I0428 17:30:41.555268    9512 api_server.go:166] Checking apiserver status ...
	I0428 17:30:41.567668    9512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0428 17:30:41.591311    9512 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0428 17:30:41.591311    9512 status.go:422] ha-267500-m02 apiserver status = Stopped (err=<nil>)
	I0428 17:30:41.591311    9512 status.go:257] ha-267500-m02 status: &{Name:ha-267500-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 17:30:41.591428    9512 status.go:255] checking status of ha-267500-m03 ...
	I0428 17:30:41.592455    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:30:43.618302    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:43.619310    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:43.619415    9512 status.go:330] ha-267500-m03 host status = "Running" (err=<nil>)
	I0428 17:30:43.619462    9512 host.go:66] Checking if "ha-267500-m03" exists ...
	I0428 17:30:43.620233    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:30:45.644298    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:45.644298    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:45.644681    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m03 ).networkadapters[0]).ipaddresses[0]
	I0428 17:30:48.059614    9512 main.go:141] libmachine: [stdout =====>] : 172.27.233.131
	
	I0428 17:30:48.060642    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:48.060642    9512 host.go:66] Checking if "ha-267500-m03" exists ...
	I0428 17:30:48.075415    9512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:30:48.075415    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:30:50.070257    9512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:30:50.070747    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:50.070747    9512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m03 ).networkadapters[0]).ipaddresses[0]
	I0428 17:30:52.511176    9512 main.go:141] libmachine: [stdout =====>] : 172.27.233.131
	
	I0428 17:30:52.511655    9512 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:30:52.511969    9512 sshutil.go:53] new ssh client: &{IP:172.27.233.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m03\id_rsa Username:docker}
	I0428 17:30:52.610845    9512 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5354215s)
	I0428 17:30:52.623497    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:30:52.649805    9512 status.go:257] ha-267500-m03 status: &{Name:ha-267500-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-267500 status --output json -v=7 --alsologtostderr" : exit status 2
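The stray cert_rotation error near the top of this test (open ...\profiles\addons-610300\client.crt) is unrelated to ha-267500; it appears to come from a stale kubeconfig user entry left over from the deleted addons-610300 profile. A quick confirmation that the certificate file is indeed gone (a sketch; Test-Path just probes the path):

	PS> Test-Path C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt
	# expected: False, matching the "cannot find the path specified" error above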
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.4375757s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (7.9097147s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-267500 -v=7                | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:25 PDT | 28 Apr 24 17:28 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
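The /etc/hosts script above is idempotent: grep -xq matches a whole line, so nothing is touched if an entry for ha-267500 already exists; otherwise an existing 127.0.1.1 line is rewritten in place, and only as a last resort is a new line appended. The empty command output here means either the entry was already present or the sed branch ran (both are silent; only the tee -a append echoes the line). A quick check over the same SSH channel would be:

  $ grep ha-267500 /etc/hosts
  127.0.1.1 ha-267500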
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
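configureAuth generates a server certificate whose SANs cover every name the daemon may be reached by (127.0.0.1, 172.27.226.61, ha-267500, localhost, minikube) and pushes ca.pem, server.pem and server-key.pem to /etc/docker, where the dockerd unit written below points its --tlscacert/--tlscert/--tlskey flags. Once that unit is running, the TLS endpoint on port 2376 can be exercised with standard openssl tooling (a sketch, run from the host using the client certs under .minikube\certs):

  $ openssl s_client -connect 172.27.226.61:2376 -CAfile ca.pem -cert cert.pem -key key.pem </dev/null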
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
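The diff one-liner above is the usual install-if-changed idiom: diff -u exits non-zero when the files differ, or, as here, when /lib/systemd/system/docker.service does not exist yet, and only then does the replace-and-restart branch run, so an unchanged unit never triggers a docker restart. Generically:

  $ sudo diff -u "$unit" "$unit.new" || { sudo mv "$unit.new" "$unit"; sudo systemctl daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }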
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
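The clock fix is worth unpacking: the guest reports epoch 1714349229.688972111 (17:07:09.689 PDT), while the host reference captured at the start of the check was 17:07:05.096398 PDT, i.e. epoch 1714349225.096398, so the logged delta is 1714349229.688972111 - 1714349225.096398 = 4.592574111 s. Since that exceeds the drift tolerance, the guest clock is pinned to the whole-second epoch with sudo date -s @1714349229 (the fractional part is dropped), and the guest, which keeps its clock in UTC, confirms Mon Apr 29 00:07:09 UTC 2024 — the same instant as 17:07:09 PDT:

  $ date -u -d @1714349229
  Mon Apr 29 00:07:09 UTC 2024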
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
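Because the chosen runtime is docker with the cgroupfs driver, the sed pipeline above rewrites /etc/containerd/config.toml to match before containerd is restarted: the sandbox image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, and the CNI conf_dir is pinned to /etc/cni/net.d. For orientation, in a stock config.toml the cgroup knob being flipped lives here (standard containerd layout, shown as an assumption about the default file):

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false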
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
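crictl picks up its endpoint from the /etc/crictl.yaml written a few lines earlier, which is why sudo /usr/bin/crictl version reaches cri-dockerd without any --runtime-endpoint flag:

  $ cat /etc/crictl.yaml
  runtime-endpoint: unix:///var/run/cri-dockerd.sock
  $ sudo crictl version
  RuntimeName:  docker
  RuntimeVersion:  26.0.2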
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
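The preload path avoids pulling each image over the network: a single lz4-compressed tarball of the docker image store is copied in (359,556,852 bytes, about 1.9 s over the Default Switch) and unpacked into /var in about 8.9 s. Two flags in the tar invocation matter: -I lz4 delegates decompression to the lz4 binary, and --xattrs --xattrs-include security.capability preserves file capabilities on the extracted binaries. The same flags let you inspect the tarball, e.g.:

  $ tar -I lz4 -tf preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 | head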
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
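The kubelet drop-in above uses the same ExecStart-reset trick as the docker unit earlier (an empty ExecStart= clears the command inherited from the base unit before the real one is set) and pins node identity with --hostname-override=ha-267500 and --node-ip=172.27.226.61, so the kubelet registers under the VM's Hyper-V address rather than whatever the guest resolves for itself. Once installed, the merged unit can be inspected inside the guest with:

  $ systemctl cat kubelet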
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
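The generated file is a single multi-document YAML: InitConfiguration (node-local bootstrap: advertise address, CRI socket, kubelet node-ip), ClusterConfiguration (the HA-relevant part — controlPlaneEndpoint is the DNS name control-plane.minikube.internal:8443 rather than a node IP, so additional control planes can join behind the VIP), plus KubeletConfiguration and KubeProxyConfiguration. It is staged as /var/tmp/minikube/kubeadm.yaml.new below; recent kubeadm (v1.26+) can sanity-check such a file before use (a sketch, assuming the staged path and the bundled binary):

  $ sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new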
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
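kube-vip runs as a static pod on each control plane (the manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below) and elects a leader through the plndr-cp-lock lease; the leader claims 172.27.239.254/32 on eth0 and answers ARP for it (vip_arp=true), which is what makes the APIServerHAVIP reachable before any second node exists. lb_enable=true additionally load-balances port 8443 across control planes via IPVS — hence the modprobe of the ip_vs* modules just before this config was rendered. On whichever node currently holds the lease, the VIP shows up as an extra address (a sketch):

  $ ip addr show eth0 | grep 172.27.239.254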
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
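The "scp memory --> <path>" entries above are minikube streaming an in-memory asset (here the generated kubeconfig) straight to the guest over SSH, rather than copying a file that exists on the host's disk. A minimal sketch of that idea in Go using golang.org/x/crypto/ssh, with the target path, address, and key path taken from this run; the pushBytes helper is illustrative, not minikube's actual implementation:

// pushbytes.go: stream an in-memory buffer to a file on a remote host over SSH.
// A minimal sketch of the "scp memory --> <path>" idea, not minikube's real code.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes data to remotePath on the host behind client, via sudo tee.
func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	session.Stdin = bytes.NewReader(data) // the "memory" side of the transfer
	// tee writes stdin to the target file; its echo is discarded.
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "172.27.226.61:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	if err := pushBytes(client, []byte("apiVersion: v1\nkind: Config\n"), "/var/lib/minikube/kubeconfig"); err != nil {
		log.Fatal(err)
	}
}

Writing through a remote "sudo tee" is a common way to create a root-owned file over a non-root SSH session without a temporary file on either side.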
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
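The three command triples above (test && ln, ls -la, openssl x509 -hash, then the <hash>.0 symlink) are the stock OpenSSL CA-store install sequence: each certificate is placed under /usr/share/ca-certificates and a hash-named symlink is created in /etc/ssl/certs so OpenSSL's lookup-by-subject-hash can find it. A compact Go rendering of the same sequence, using one of the paths from this run; the installCA helper is illustrative:

// cahash.go: install a CA certificate under /etc/ssl/certs using the
// OpenSSL hash-symlink convention, mirroring the ssh_runner commands above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	// openssl prints the subject-name hash used for lookup-by-hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}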
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
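The --discovery-token-ca-cert-hash value printed in both join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA during token-based discovery. A self-contained Go sketch of how that digest is derived from the CA file used in this run:

// cacerthash.go: compute the kubeadm discovery-token-ca-cert-hash for a CA
// certificate: sha256 over the DER-encoded Subject Public Key Info.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no certificate block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // matches the hash in the kubeadm join commands
}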
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
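The burst of "kubectl get sa default" calls above, spaced roughly 500ms apart, is a readiness poll: minikube retries until the token controller has created the default service account, and the 11.45s elevateKubeSystemPrivileges metric is just this loop's elapsed time. A minimal Go rendering of the loop, with the exact command from the log; the function name and timeout are illustrative:

// waitsa.go: poll until `kubectl get sa default` succeeds, mirroring the
// ~500ms retry cadence visible in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		log.Fatal(err)
	}
}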
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
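The sed pipeline above edits the coredns ConfigMap in place. Reconstructed from its two insert expressions, the patched region of the Corefile gains a log directive ahead of errors and a hosts block ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the Hyper-V host at 172.27.224.1:

        log
        errors
        # ... other plugins unchanged ...
        hosts {
           172.27.224.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough directive lets every name other than the injected host record continue to the forward plugin as before.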
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
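The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT to .../storageclasses/standard is the default-storageclass addon marking the standard class as the cluster default, which is conventionally done by setting the storageclass.kubernetes.io/is-default-class annotation. An equivalent imperative form, under the assumption that this annotation is what the PUT writes:

// defaultsc.go: mark the "standard" StorageClass as the cluster default by
// setting the well-known annotation; an imperative equivalent of the PUT above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "annotate", "storageclass", "standard",
		"storageclass.kubernetes.io/is-default-class=true", "--overwrite")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("annotate failed: %v\n%s", err, out)
	}
}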
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
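The fixed-10MB New-VHD, the "Writing magic tar header" / "Writing SSH key tar header" steps, and the Convert-VHD to a dynamic disk resized to 20000MB are one sequence: libmachine seeds the raw disk with a tar stream carrying the machine's SSH key, which the boot2docker guest unpacks on first boot, then grows the disk to its real size. A sketch of the seeding step; the tar entry names the guest expects are an assumption in this sketch:

// seeddisk.go: write a tar stream containing the machine SSH key at the start
// of a raw fixed VHD, in the spirit of the "magic tar header" step above.
// The exact entry names the guest unpacks are assumed, not taken from the log.
package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa.pub`)
	if err != nil {
		log.Fatal(err)
	}
	// A fixed VHD keeps its metadata footer at the end of the file, so the
	// data area starts at offset 0 and can be written to directly.
	disk, err := os.OpenFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd`, os.O_WRONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer disk.Close()

	tw := tar.NewWriter(disk)
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir}); err != nil {
		log.Fatal(err)
	}
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil { // flushes padding and the end-of-archive blocks
		log.Fatal(err)
	}
}

Converting to a dynamic VHD afterwards (with -DeleteSource) preserves the seeded bytes while letting the 20000MB disk occupy only the space actually written.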
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
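The alternating Get-VM .state and .networkadapters[0].ipaddresses[0] queries from 17:09:03 to 17:09:31 are the "Waiting for host to start..." loop: the adapter reports no address until the guest's network stack is up, so the driver re-polls until one appears. The same loop shelled out to PowerShell from Go; the retry count and one-second cadence are read off the log timestamps, not minikube's actual constants:

// waitip.go: poll Hyper-V for a VM's first IP address, mirroring the
// "Waiting for host to start..." loop in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func vmIP(name string) (string, error) {
	ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", ps).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for i := 0; i < 300; i++ { // give the guest a few minutes to boot
		ip, err := vmIP("ha-267500-m02")
		if err != nil {
			log.Fatal(err)
		}
		if ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for an IP address")
}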
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
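The shell snippet above follows the Debian-style convention of mapping the machine's own hostname to 127.0.1.1: if nothing in /etc/hosts resolves ha-267500-m02 yet, it either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. A quick way to confirm the result over SSH (an illustrative check, not part of this run):

	grep -n '^127.0.1.1' /etc/hosts
	# expected after provisioning: 127.0.1.1 ha-267500-m02
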
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
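The server certificate generated here is the one dockerd will present on tcp://0.0.0.0:2376 (see the ExecStart written further below), so the SAN list (127.0.0.1, the VM IP 172.27.238.86, the hostname, localhost, minikube) has to cover every name a client might dial. The SANs of the resulting server.pem can be inspected with stock openssl (an illustrative check, not something the harness runs):

	openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
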
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
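For reference, "df --output=fstype /" prints a one-line Type header followed by the filesystem type of the root mount, so "tail -n 1" leaves just the type itself; tmpfs here means the Buildroot guest is running from a RAM-backed root:

	df --output=fstype /
	# Type
	# tmpfs
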
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
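The unit written above relies on systemd's rule that an empty ExecStart= line clears any ExecStart inherited from an earlier definition before the replacement command is set, exactly as the embedded comments explain. Two standard ways to verify what systemd ended up with (a hedged aside; the run itself invokes systemctl cat a little further below during cgroup-driver detection):

	systemctl cat docker.service                 # show the unit file(s) in effect
	systemctl show -p ExecStart docker.service   # the final resolved ExecStart
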
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
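In the clock-sync step just completed, minikube read the guest clock over SSH (the date command is logged with mangled %!s(MISSING) format verbs, an artifact of the logging call itself), diffed it against the host wall clock (delta=4.624341084s here), and reset the guest with sudo date -s @<epoch>. A minimal stand-alone sketch of the same idea, with the IP, user, and key path taken from this log and a 2-second threshold as an illustrative assumption rather than minikube's actual policy:

	KEY='C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa'
	GUEST=$(ssh -i "$KEY" docker@172.27.238.86 'date +%s')   # guest epoch seconds
	HOST=$(date +%s)                                         # host epoch seconds
	DELTA=$(( GUEST > HOST ? GUEST - HOST : HOST - GUEST ))
	# threshold is illustrative; the real policy may differ
	[ "$DELTA" -gt 2 ] && ssh -i "$KEY" docker@172.27.238.86 "sudo date -s @$HOST"
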
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
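The find/mv step above sidelines any bridge or podman CNI configs by renaming them with an .mk_disabled suffix, so they no longer match the *.conflist patterns that container runtimes scan in /etc/cni/net.d; the log confirms 87-podman-bridge.conflist was the one renamed. The expected on-disk effect (illustrative, not captured output):

	ls /etc/cni/net.d
	# 87-podman-bridge.conflist.mk_disabled
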
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
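The failure recorded above reduces to this: after the runtime reconfiguration, the restarted dockerd (pid 1016) spent its full 60-second dial timeout waiting on /run/containerd/containerd.sock and gave up (context deadline exceeded), so docker.service exited with status 1 and minikube aborted with RUNTIME_ENABLE. Note that the first daemon (pid 662) had instead launched its own managed containerd on /var/run/docker/containerd/containerd.sock, which is one plausible reason the system-level socket never answered. Standard triage commands on the guest (a hedged suggestion; none of this is output from the run):

	systemctl status containerd docker --no-pager
	journalctl -u containerd --no-pager | tail -n 50
	ls -l /run/containerd/containerd.sock   # does the socket even exist?
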
	
	
	==> Docker <==
	Apr 29 00:24:55 ha-267500 dockerd[1316]: 2024/04/29 00:24:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) [identical line repeated 6 times]
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) [identical line repeated 6 times]
	Apr 29 00:29:15 ha-267500 dockerd[1316]: 2024/04/29 00:29:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) [identical line repeated 2 times]
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) [identical line repeated 8 times]
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         22 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     23 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         23 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         23 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         23 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:31:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  NodeHasSufficientMemory  23m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 23m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                22m   kubelet          Node ha-267500 status is now: NodeReady
	
	
	Name:               ha-267500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T17_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:31:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.233.131
	  Hostname:    ha-267500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0562d38ef374b73969ab15fed947e11
	  System UUID:                c94a104a-b670-854e-ac89-f41b3533cc69
	  Boot ID:                    bca10429-bddd-4547-8fb0-c50d93740969
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jxx6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-mspbr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m9s
	  kube-system                 kube-proxy-jcph5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)  kubelet          Node ha-267500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-267500-m03 event: Registered Node ha-267500-m03 in Controller
	  Normal  NodeReady                2m52s                kubelet          Node ha-267500-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"warn","ts":"2024-04-29T00:27:56.3113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:27:55.726454Z","time spent":"584.410622ms","remote":"127.0.0.1:52796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2570 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-04-29T00:27:56.685502Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2039}
	{"level":"info","ts":"2024-04-29T00:27:56.697671Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2039,"took":"11.701828ms","hash":3710382387,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T00:27:56.697806Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710382387,"revision":2039,"compact-revision":1501}
	{"level":"warn","ts":"2024-04-29T00:28:01.015001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.747946ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321686993 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:28:01.015171Z","caller":"traceutil/trace.go:171","msg":"trace[1066403778] linearizableReadLoop","detail":"{readStateIndex:2839; appliedIndex:2838; }","duration":"166.002504ms","start":"2024-04-29T00:28:00.849156Z","end":"2024-04-29T00:28:01.015158Z","steps":["trace[1066403778] 'read index received'  (duration: 64.927058ms)","trace[1066403778] 'applied index is now lower than readState.Index'  (duration: 101.074346ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:01.015567Z","caller":"traceutil/trace.go:171","msg":"trace[1444860087] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"309.347954ms","start":"2024-04-29T00:28:00.706202Z","end":"2024-04-29T00:28:01.01555Z","steps":["trace[1444860087] 'process raft request'  (duration: 207.946307ms)","trace[1444860087] 'compare'  (duration: 100.659345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:01.015577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-46antb\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:01.015843Z","caller":"traceutil/trace.go:171","msg":"trace[1574012410] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-46antb; range_end:; response_count:0; response_revision:2582; }","duration":"166.706906ms","start":"2024-04-29T00:28:00.849128Z","end":"2024-04-29T00:28:01.015834Z","steps":["trace[1574012410] 'agreement among raft nodes before linearized reading'  (duration: 166.065204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:01.015715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:00.706185Z","time spent":"309.436654ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"info","ts":"2024-04-29T00:28:01.13236Z","caller":"traceutil/trace.go:171","msg":"trace[848518735] transaction","detail":"{read_only:false; response_revision:2583; number_of_response:1; }","duration":"106.51056ms","start":"2024-04-29T00:28:01.02575Z","end":"2024-04-29T00:28:01.132261Z","steps":["trace[848518735] 'process raft request'  (duration: 100.002844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:10.719709Z","caller":"traceutil/trace.go:171","msg":"trace[688876790] transaction","detail":"{read_only:false; response_revision:2633; number_of_response:1; }","duration":"131.602022ms","start":"2024-04-29T00:28:10.588085Z","end":"2024-04-29T00:28:10.719687Z","steps":["trace[688876790] 'process raft request'  (duration: 131.335422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321687140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:253b8f272df6da63>","response":"size:41"}
	{"level":"info","ts":"2024-04-29T00:28:11.057309Z","caller":"traceutil/trace.go:171","msg":"trace[730869850] linearizableReadLoop","detail":"{readStateIndex:2894; appliedIndex:2893; }","duration":"310.80146ms","start":"2024-04-29T00:28:10.746493Z","end":"2024-04-29T00:28:11.057294Z","steps":["trace[730869850] 'read index received'  (duration: 118.63939ms)","trace[730869850] 'applied index is now lower than readState.Index'  (duration: 192.16047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.057392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.91436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:11.057434Z","caller":"traceutil/trace.go:171","msg":"trace[965932074] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2633; }","duration":"310.98056ms","start":"2024-04-29T00:28:10.746443Z","end":"2024-04-29T00:28:11.057424Z","steps":["trace[965932074] 'agreement among raft nodes before linearized reading'  (duration: 310.91126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.746429Z","time spent":"311.02236ms","remote":"127.0.0.1:52498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T00:28:11.057874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.721431Z","time spent":"336.441923ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T00:28:11.410781Z","caller":"traceutil/trace.go:171","msg":"trace[921900368] linearizableReadLoop","detail":"{readStateIndex:2895; appliedIndex:2894; }","duration":"284.369895ms","start":"2024-04-29T00:28:11.126395Z","end":"2024-04-29T00:28:11.410765Z","steps":["trace[921900368] 'read index received'  (duration: 193.861274ms)","trace[921900368] 'applied index is now lower than readState.Index'  (duration: 90.507421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.411124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.711696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-267500-m03\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-04-29T00:28:11.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1500780481] range","detail":"{range_begin:/registry/minions/ha-267500-m03; range_end:; response_count:1; response_revision:2634; }","duration":"284.831096ms","start":"2024-04-29T00:28:11.126391Z","end":"2024-04-29T00:28:11.411222Z","steps":["trace[1500780481] 'agreement among raft nodes before linearized reading'  (duration: 284.474795ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:11.411809Z","caller":"traceutil/trace.go:171","msg":"trace[1062724437] transaction","detail":"{read_only:false; response_revision:2634; number_of_response:1; }","duration":"351.77576ms","start":"2024-04-29T00:28:11.059046Z","end":"2024-04-29T00:28:11.410821Z","steps":["trace[1062724437] 'process raft request'  (duration: 261.137839ms)","trace[1062724437] 'compare'  (duration: 90.397121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.412239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:11.059032Z","time spent":"352.927263ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2582 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911331 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"warn","ts":"2024-04-29T00:28:16.429655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.224744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:16.429988Z","caller":"traceutil/trace.go:171","msg":"trace[1266991256] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2651; }","duration":"181.540745ms","start":"2024-04-29T00:28:16.248407Z","end":"2024-04-29T00:28:16.429948Z","steps":["trace[1266991256] 'range keys from in-memory index tree'  (duration: 181.210444ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:31:11 up 25 min,  0 users,  load average: 0.55, 0.43, 0.37
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:30:06.553373       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:30:16.570462       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:30:16.570622       1 main.go:227] handling current node
	I0429 00:30:16.570647       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:30:16.571101       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:30:26.577865       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:30:26.578031       1 main.go:227] handling current node
	I0429 00:30:26.578046       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:30:26.578053       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:30:36.591630       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:30:36.592021       1 main.go:227] handling current node
	I0429 00:30:36.592360       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:30:36.592393       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:30:46.606304       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:30:46.606518       1 main.go:227] handling current node
	I0429 00:30:46.606536       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:30:46.606545       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:30:56.623337       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:30:56.623433       1 main.go:227] handling current node
	I0429 00:30:56.623449       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:30:56.623457       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:31:06.629705       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:31:06.629803       1 main.go:227] handling current node
	I0429 00:31:06.629816       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:31:06.629824       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	I0429 00:27:56.312329       1 trace.go:236] Trace[1132022318]: "Update" accept:application/json, */*,audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:27:55.724) (total time: 587ms):
	Trace[1132022318]: ["GuaranteedUpdate etcd3" audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 587ms (00:27:55.725)
	Trace[1132022318]:  ---"Txn call completed" 586ms (00:27:56.312)]
	Trace[1132022318]: [587.55203ms] [587.55203ms] END
	I0429 00:28:11.413223       1 trace.go:236] Trace[768089845]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.226.61,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 00:28:10.678) (total time: 734ms):
	Trace[768089845]: ---"Transaction prepared" 338ms (00:28:11.058)
	Trace[768089845]: ---"Txn call completed" 354ms (00:28:11.413)
	Trace[768089845]: [734.530496ms] [734.530496ms] END
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	I0429 00:28:02.170352       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-267500-m03\" does not exist"
	I0429 00:28:02.230498       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500-m03" podCIDRs=["10.244.1.0/24"]
	I0429 00:28:05.393266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-267500-m03"
	I0429 00:28:19.456843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-267500-m03"
	I0429 00:28:19.485470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0429 00:28:19.487549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.6µs"
	I0429 00:28:19.505362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.4µs"
	I0429 00:28:22.722440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.931424ms"
	I0429 00:28:22.722950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.301µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 00:27:02 ha-267500 kubelet[2223]: E0429 00:27:02.769330    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:27:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:27:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:28:02 ha-267500 kubelet[2223]: E0429 00:28:02.772180    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:29:02 ha-267500 kubelet[2223]: E0429 00:29:02.767197    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:30:02 ha-267500 kubelet[2223]: E0429 00:30:02.771457    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:31:02 ha-267500 kubelet[2223]: E0429 00:31:02.770173    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:31:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:31:04.237414    7804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
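Note: the stderr warning above means the Docker CLI's "default" context metadata file is missing on the Windows host, so minikube falls back without it. A minimal remediation sketch, assuming a standard Docker CLI install (these commands are not part of the captured test output):

	docker context ls          # list known contexts; "default" is built in
	docker context use default # re-select "default" as the current context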
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.5065088s)
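The --format value passed to minikube status above is a Go text/template rendered against minikube's status structure. A standalone sketch of the same templating mechanism, assuming a struct with an APIServer field (the field name is taken from the template in the command above, not from minikube's source):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the structure minikube renders; only the
	// field referenced by the template above is modeled here.
	type Status struct{ APIServer string }

	func main() {
		// Parse and execute the same template string used on the command line.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = tmpl.Execute(os.Stdout, Status{APIServer: "Running"}) // prints: Running
	}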
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  3m52s (x5 over 19m)   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2m55s (x2 over 3m5s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
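The FailedScheduling events above are the signature of required pod anti-affinity: each busybox replica must land on a different node, so with only two schedulable nodes the third replica stays Pending. For reference, a minimal Go sketch of the affinity shape involved (illustrative only, assuming the k8s.io/api types; the deployment the test applies is defined in the minikube test data):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAntiAffinity spreads pods labeled app=busybox one per node:
// the scheduler will not co-locate two such pods on the same
// kubernetes.io/hostname, which is why the third replica is Pending
// while only two nodes are Ready.
func busyboxAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}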
--- FAIL: TestMultiControlPlane/serial/CopyFile (66.04s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (94.55s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 node stop m02 -v=7 --alsologtostderr: (37.2320327s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: exit status 7 (24.728472s)

-- stdout --
	ha-267500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-267500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-267500-m03
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	W0428 17:32:02.071619   10200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 17:32:02.078854   10200 out.go:291] Setting OutFile to fd 1452 ...
	I0428 17:32:02.079592   10200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:32:02.079592   10200 out.go:304] Setting ErrFile to fd 1060...
	I0428 17:32:02.080169   10200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:32:02.099409   10200 out.go:298] Setting JSON to false
	I0428 17:32:02.099409   10200 mustload.go:65] Loading cluster: ha-267500
	I0428 17:32:02.099409   10200 notify.go:220] Checking for updates...
	I0428 17:32:02.100620   10200 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:32:02.100620   10200 status.go:255] checking status of ha-267500 ...
	I0428 17:32:02.101984   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:32:04.170642   10200 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:32:04.170642   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:04.170642   10200 status.go:330] ha-267500 host status = "Running" (err=<nil>)
	I0428 17:32:04.170810   10200 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:32:04.171566   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:32:06.230061   10200 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:32:06.230061   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:06.230061   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:32:08.715826   10200 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:32:08.715826   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:08.715826   10200 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:32:08.730194   10200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:32:08.730194   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:32:10.712105   10200 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:32:10.712105   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:10.713148   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:32:13.194754   10200 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:32:13.194815   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:13.194815   10200 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:32:13.291709   10200 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5615074s)
	I0428 17:32:13.306609   10200 ssh_runner.go:195] Run: systemctl --version
	I0428 17:32:13.331365   10200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:32:13.359585   10200 kubeconfig.go:125] found "ha-267500" server: "https://172.27.239.254:8443"
	I0428 17:32:13.359585   10200 api_server.go:166] Checking apiserver status ...
	I0428 17:32:13.370446   10200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 17:32:13.405872   10200 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup
	W0428 17:32:13.428927   10200 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0428 17:32:13.439911   10200 ssh_runner.go:195] Run: ls
	I0428 17:32:13.448906   10200 api_server.go:253] Checking apiserver healthz at https://172.27.239.254:8443/healthz ...
	I0428 17:32:13.457950   10200 api_server.go:279] https://172.27.239.254:8443/healthz returned 200:
	ok
	I0428 17:32:13.458777   10200 status.go:422] ha-267500 apiserver status = Running (err=<nil>)
	I0428 17:32:13.458838   10200 status.go:257] ha-267500 status: &{Name:ha-267500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 17:32:13.458838   10200 status.go:255] checking status of ha-267500-m02 ...
	I0428 17:32:13.459458   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:32:15.431649   10200 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 17:32:15.431649   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:15.431649   10200 status.go:330] ha-267500-m02 host status = "Stopped" (err=<nil>)
	I0428 17:32:15.431649   10200 status.go:343] host is not running, skipping remaining checks
	I0428 17:32:15.431649   10200 status.go:257] ha-267500-m02 status: &{Name:ha-267500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 17:32:15.431649   10200 status.go:255] checking status of ha-267500-m03 ...
	I0428 17:32:15.432362   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:32:17.485110   10200 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:32:17.485929   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:17.485929   10200 status.go:330] ha-267500-m03 host status = "Running" (err=<nil>)
	I0428 17:32:17.485990   10200 host.go:66] Checking if "ha-267500-m03" exists ...
	I0428 17:32:17.486797   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:32:19.549422   10200 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:32:19.549422   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:19.549676   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m03 ).networkadapters[0]).ipaddresses[0]
	I0428 17:32:21.993782   10200 main.go:141] libmachine: [stdout =====>] : 172.27.233.131
	
	I0428 17:32:21.994020   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:21.994020   10200 host.go:66] Checking if "ha-267500-m03" exists ...
	I0428 17:32:22.008059   10200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 17:32:22.008059   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m03 ).state
	I0428 17:32:24.016653   10200 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:32:24.016653   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:24.017308   10200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m03 ).networkadapters[0]).ipaddresses[0]
	I0428 17:32:26.510958   10200 main.go:141] libmachine: [stdout =====>] : 172.27.233.131
	
	I0428 17:32:26.510958   10200 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:32:26.511509   10200 sshutil.go:53] new ssh client: &{IP:172.27.233.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m03\id_rsa Username:docker}
	I0428 17:32:26.609612   10200 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6015453s)
	I0428 17:32:26.622456   10200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 17:32:26.646850   10200 status.go:257] ha-267500-m03 status: &{Name:ha-267500-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
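For the surviving control plane, the status path above verified health by probing https://172.27.239.254:8443/healthz (the I0428 17:32:13 lines) and treating an HTTP 200 "ok" as Running. A minimal sketch of that check; skipping TLS verification is an assumption for brevity here, since the real code path authenticates with the cluster's client certificates:

package example

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// probeHealthz reproduces the logged check: GET /healthz on the
// control-plane endpoint and accept only HTTP 200.
func probeHealthz(endpoint string) error {
	client := &http.Client{Transport: &http.Transport{
		// Assumption for this sketch only; the real client uses certs.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}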
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr": ha-267500
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-267500-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-267500-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:378: status says not three hosts are running: args "out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr": ha-267500
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-267500-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-267500-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:381: status says not three kubelets are running: args "out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr": ha-267500
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-267500-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-267500-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:384: status says not two apiservers are running: args "out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr": ha-267500
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-267500-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-267500-m03
type: Worker
host: Running
kubelet: Running

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.7413057s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (8.0820835s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-267500 -v=7                | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:25 PDT | 28 Apr 24 17:28 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-267500 node stop m02 -v=7         | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:31 PDT | 28 Apr 24 17:32 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
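[Editor's sketch] The unit written below is rendered by minikube from a Go text/template and streamed through sudo tee over SSH. A minimal sketch of that rendering step, assuming a hypothetical trimmed template (the real unit, visible in the log below, carries many more directives; the empty ExecStart= line it contains clears any inherited command, as its own comment explains):

package main

import (
	"os"
	"text/template"
)

// A trimmed, hypothetical fragment of a docker.service template;
// not minikube's actual template.
const unitTmpl = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} --tlsverify --tlscacert {{.CA}} --tlscert {{.Cert}} --tlskey {{.Key}}
`

func main() {
	t := template.Must(template.New("docker").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, map[string]string{
		"Port": "2376",
		"CA":   "/etc/docker/ca.pem",
		"Cert": "/etc/docker/server.pem",
		"Key":  "/etc/docker/server-key.pem",
	}); err != nil {
		panic(err)
	}
}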
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
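[Editor's sketch] The update command above (diff -u … || { mv …; systemctl -f daemon-reload && … restart docker; }) is an idempotent compare-and-swap: docker is only enabled and restarted when the rendered unit differs from, or is missing at, /lib/systemd/system/docker.service. A rough Go equivalent of that shell idiom (a sketch only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// applyUnit mirrors the shell idiom above: swap in the new unit and
// restart docker only when it differs from the installed one.
func applyUnit() error {
	if err := exec.Command("sudo", "diff", "-u",
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new").Run(); err == nil {
		return nil // files identical: nothing to do
	}
	steps := [][]string{
		{"sudo", "mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	}
	for _, s := range steps {
		if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
			return fmt.Errorf("%v: %w", s, err)
		}
	}
	return nil
}

func main() {
	if err := applyUnit(); err != nil {
		fmt.Println(err)
	}
}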
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
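[Editor's sketch] The delta logged above is simply guest clock minus host-side wall clock: 17:07:09.688972111 − 17:07:05.096398 ≈ 4.592574111s, after which the guest is reset with sudo date -s @1714349229. The same arithmetic in Go (timestamps taken from the log; the 2s threshold is an assumption for illustration, not minikube's actual policy):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the fix.go lines above.
	guest := time.Unix(1714349229, 688972111)
	remote := time.Date(2024, 4, 28, 17, 7, 5, 96398000, time.FixedZone("PDT", -7*60*60))
	delta := guest.Sub(remote)
	fmt.Println("delta:", delta) // ≈ 4.592574111s, matching the log
	// Reset the guest clock when drift is noticeable (threshold assumed).
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("sudo date -s @%d\n", guest.Unix())
	}
}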
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
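[Editor's sketch] Each sed above edits /etc/containerd/config.toml in place; for instance, the SystemdCgroup substitution forces containerd onto the cgroupfs driver so it agrees with the kubelet. The same rewrite expressed with Go's regexp package (sketch only; the config fragment is a stand-in):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment standing in for /etc/containerd/config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same substitution as the sed command above: force the cgroupfs driver.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}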
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
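[Editor's sketch] ip.go walks the host's network interfaces, skips the ones whose names don't match the prefix "vEthernet (Default Switch)", and records the addresses of the first that does (here 172.27.224.1/20, which becomes host.minikube.internal below). A minimal stdlib sketch of that prefix match (hypothetical code, not minikube's):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Pick the host interface for the Hyper-V switch by name prefix,
	// then print its IPv4 address.
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Println(ifc.Name, "->", ipnet) // e.g. 172.27.224.1/20
			}
		}
	}
}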
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
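[Editor's sketch] The preload check is a set comparison: list what docker images --format {{.Repository}}:{{.Tag}} reports and confirm the expected tags landed after the tarball extraction. A sketch of that check (the two tags below are a subset of the eight listed above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List what the daemon has, then confirm the preloaded tags landed.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
	} {
		fmt.Println(want, "preloaded:", have[want])
	}
}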
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
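[Editor's sketch] The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A throwaway Go sketch that splits such a stream and reports each document's kind (plain string handling; a real consumer would use a YAML parser):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The four kinds from the generated kubeadm.yaml above.
	cfg := `kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration`
	for i, doc := range strings.Split(cfg, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if kind, ok := strings.CutPrefix(line, "kind: "); ok {
				fmt.Printf("document %d: %s\n", i, kind)
			}
		}
	}
}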
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
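[Editor's sketch] kube-vip runs as a static pod on each control-plane node; the env block above turns on leader election (vip_leaderelection, via the plndr-cp-lock lease) and control-plane load-balancing (lb_enable) for the HA VIP 172.27.239.254:8443. Once a leader holds the lease, the VIP should answer TLS on the API port, which a probe like this sketch can confirm (illustrative only):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the HA VIP from the kube-vip config above.
	d := net.Dialer{Timeout: 3 * time.Second}
	conn, err := tls.DialWithDialer(&d, "tcp", "172.27.239.254:8443",
		&tls.Config{InsecureSkipVerify: true}) // probe only: skip verification
	if err != nil {
		fmt.Println("VIP not serving yet:", err)
		return
	}
	defer conn.Close()
	if certs := conn.ConnectionState().PeerCertificates; len(certs) > 0 {
		fmt.Println("VIP serving, subject:", certs[0].Subject)
	}
}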
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
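[Editor's sketch] The three test/ln pairs above install each PEM into the guest's shared trust store under its OpenSSL subject-hash name (e.g. minikubeCA.pem -> b5213941.0), which is how TLS clients on the guest locate CAs. A small Go sketch that loads a CA back from such a hashed path (path taken from the log; the loading logic is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path from the symlink step above (minikubeCA.pem's subject hash).
	data, err := os.ReadFile("/etc/ssl/certs/b5213941.0")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block in file")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("loaded CA:", cert.Subject.CommonName)
}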
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
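The four grep-then-rm pairs above are the stale-config check: any kubeconfig under /etc/kubernetes that does not already point at the expected control-plane endpoint is deleted before kubeadm init runs. The same logic as one compact loop (a sketch; minikube issues each command separately over SSH):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets the HA endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done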
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
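The bootstrap token in the join commands above (o2t0fz.gqoxv8rhmbtgnafl) is short-lived. With stock kubeadm, a fresh join command can be generated on the primary control plane as sketched below; note that minikube drives node joins itself and, per the output above, copies certificates rather than using upload-certs:

	# Print a fresh worker join command (run on the primary control plane).
	kubeadm token create --print-join-command
	# For an extra control-plane node, upload certs and print the certificate key,
	# then append --control-plane --certificate-key <key> to the join command.
	sudo kubeadm init phase upload-certs --upload-certs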
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
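Because more nodes are requested, minikube applies the kindnet CNI manifest it just copied to /var/tmp/minikube/cni.yaml. A quick health check once the apply returns (illustrative; the app=kindnet label is the one kindnet's DaemonSet conventionally carries, assumed here):

	# One kindnet pod should reach Running on every node.
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet -o wide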
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
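The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the Hyper-V host (172.27.224.1). The patched Corefile can be inspected as below; the comment reproduces the stanza exactly as the sed expression inserts it:

	# Show the patched Corefile; it should now contain:
	#     hosts {
	#        172.27.224.1 host.minikube.internal
	#        fallthrough
	#     }
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml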
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
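The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon marking "standard" as the cluster default. minikube talks to the API directly, but the effect is roughly what this kubectl patch would do (shown for illustration):

	# Annotate "standard" as the default StorageClass.
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'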
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
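The fixed-to-dynamic VHD dance above is how the driver smuggles the SSH key into the new guest: a 10 MB fixed VHD is created, a magic marker and a tar archive holding the key are written straight into its raw bytes (the "Writing magic tar header" / "Writing SSH key tar header" lines), and the disk is then converted to dynamic and grown to 20000 MB. On first boot the boot2docker-style init formats the disk and unpacks the archive. A rough reconstruction of the payload, assuming the docker-machine boot2docker convention; the magic string and tar layout are assumptions, not confirmed by this log:

	# Hypothetical raw-disk payload (names and magic string assumed):
	printf 'boot2docker, please format-me' > disk.raw   # marker the guest init scans for
	tar -cf - -C "$KEY_DIR" .ssh >> disk.raw            # .ssh/ holds the generated key pair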
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
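configureAuth signs a Docker TLS server certificate whose SANs cover every name the machine may be reached by (the san=[...] list above). minikube does this in Go; an equivalent OpenSSL sketch with the same subject and SANs, purely for illustration:

	# CSR for the machine, then sign it with the minikube CA, adding the SANs.
	openssl req -new -key server-key.pem -subj "/O=jenkins.ha-267500-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.27.238.86,DNS:ha-267500-m02,DNS:localhost,DNS:minikube') \
	  -out server.pem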
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
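	
	The sequence above provisions Docker's TLS material for the new node: copyHostCerts refreshes ca.pem/cert.pem/key.pem under the host's .minikube directory, a server certificate is generated with the node's SANs (127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube), and copyRemoteCerts installs ca.pem, server.pem and server-key.pem into /etc/docker, where the dockerd command rendered below points --tlsverify at them. A minimal hand-check of that endpoint, sketched against the host cert paths and the 172.27.238.86 address from this log:
	
	    # Talk to the provisioned daemon over TLS; client certs live under the
	    # host's .minikube/certs, and -H targets the node IP logged above.
	    docker --tlsverify \
	      --tlscacert "$HOME/.minikube/certs/ca.pem" \
	      --tlscert   "$HOME/.minikube/certs/cert.pem" \
	      --tlskey    "$HOME/.minikube/certs/key.pem" \
	      -H tcp://172.27.238.86:2376 version
	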
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
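	
	Two idioms in the docker.service handling above are worth calling out. Inside the rendered unit, the bare "ExecStart=" line clears any inherited start command before the real one is set; as the unit's own comment notes, systemd allows multiple ExecStart= entries only for Type=oneshot services, and this unit is Type=notify. And the install command is a diff-or-install guard: the new file replaces the live unit, and the daemon is restarted, only when the rendered content actually differs (here diff failed because no docker.service existed yet, so the branch ran and the symlink was created). The same clearing pattern as a hypothetical standalone drop-in, illustrative and not part of this run:
	
	    # Clear the inherited ExecStart in an override, then supply a replacement.
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    printf '%s\n' '[Service]' 'ExecStart=' \
	      'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
	      | sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	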
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
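	
	The %!s(MISSING) tokens are an artifact of the command string passing through a printf-style logger, not part of the command itself; judging by the seconds.nanoseconds output 1714349431.710726684, what actually ran was "date +%s.%N". fix.go then compares that guest clock with the host's, finds a 4.6s drift, and pins the guest via "sudo date -s @1714349431". The same check as a sketch, assuming SSH from a unix-like host and an illustrative key path:
	
	    # Measure guest/host clock skew, then pin the guest to the host's epoch.
	    key=~/.minikube/machines/ha-267500-m02/id_rsa   # path is illustrative
	    guest=$(ssh -i "$key" docker@172.27.238.86 'date +%s')
	    host=$(date +%s)
	    echo "skew: $((host - guest))s"
	    ssh -i "$key" docker@172.27.238.86 "sudo date -s @${host}"
	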
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
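	
	The find/mv pass above is how competing CNI configs get disabled without being deleted: anything matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix (here, 87-podman-bridge.conflist), and the -not -name '*.mk_disabled' guard makes reruns a no-op. An equivalent, slightly more readable form of the same command, as a sketch:
	
	    # Park bridge/podman CNI configs under .mk_disabled; idempotent on rerun.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	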
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
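	
	The sed batch above rewrites /etc/containerd/config.toml in place: the cgroupfs driver rather than systemd cgroups (SystemdCgroup = false), registry.k8s.io/pause:3.9 as the sandbox image, the legacy io.containerd.runtime.v1.linux and runc.v1 handlers mapped to io.containerd.runc.v2, conf_dir pinned to /etc/cni/net.d, and enable_unprivileged_ports switched on, followed by a daemon-reload and a containerd restart. A quick post-edit check on the guest, as a sketch:
	
	    # Confirm the cgroup-driver and sandbox-image edits landed.
	    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: = false
	    grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: pause:3.9
	    sudo systemctl is-active containerd
	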
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
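	
	The 130-byte daemon.json itself is not reproduced in the log. Docker's cgroup driver is conventionally selected through exec-opts, so the payload is presumably along these lines (hypothetical shape, not the verbatim file):
	
	    # Inspect the daemon.json scp'd above; a cgroupfs config typically reads:
	    cat /etc/docker/daemon.json
	    # {"exec-opts": ["native.cgroupdriver=cgroupfs"]}
	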
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
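	
	The journalctl excerpt narrows the failure: the first dockerd (pid 662) came up cleanly at 00:10:17, was stopped for reconfiguration at 00:10:47, and the restarted daemon (pid 1016) then waited a full minute before "failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded". One plausible reading, hedged: the system containerd had just been stopped ("sudo systemctl stop -f containerd" earlier in this run), and the restarted dockerd dialed that external socket rather than spawning a managed containerd as the first start did, so docker.service timed out with status 1 and the node never got a runtime. Triage steps for this failure mode, using standard tooling on the guest:
	
	    # Is containerd alive, and does the socket dockerd is dialing exist?
	    systemctl status containerd --no-pager
	    ls -l /run/containerd/containerd.sock
	    sudo journalctl -u containerd --no-pager | tail -n 50
	    # If containerd is simply down, restoring it usually unblocks docker:
	    sudo systemctl start containerd && sudo systemctl restart docker
	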
	
	
	==> Docker <==
	Apr 29 00:24:56 ha-267500 dockerd[1316]: 2024/04/29 00:24:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:14 ha-267500 dockerd[1316]: 2024/04/29 00:29:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:15 ha-267500 dockerd[1316]: 2024/04/29 00:29:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:29:15 ha-267500 dockerd[1316]: 2024/04/29 00:29:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         24 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     24 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         24 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         24 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         24 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         24 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:32:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:27:57 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24m   kube-proxy       
	  Normal  NodeHasSufficientMemory  24m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 24m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                24m   kubelet          Node ha-267500 status is now: NodeReady
	
	
	Name:               ha-267500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T17_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:32:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.233.131
	  Hostname:    ha-267500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0562d38ef374b73969ab15fed947e11
	  System UUID:                c94a104a-b670-854e-ac89-f41b3533cc69
	  Boot ID:                    bca10429-bddd-4547-8fb0-c50d93740969
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jxx6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kindnet-mspbr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m44s
	  kube-system                 kube-proxy-jcph5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m44s (x2 over 4m44s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x2 over 4m44s)  kubelet          Node ha-267500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x2 over 4m44s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-267500-m03 event: Registered Node ha-267500-m03 in Controller
	  Normal  NodeReady                4m27s                  kubelet          Node ha-267500-m03 status is now: NodeReady
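
Both node blocks above report Ready=True with recent heartbeats, which is the condition the multi-control-plane health checks key on. For reference, a minimal client-go sketch that prints the same condition per node (the kubeconfig path is a placeholder; minikube writes one per profile):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}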
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"info","ts":"2024-04-29T00:27:56.685502Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2039}
	{"level":"info","ts":"2024-04-29T00:27:56.697671Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2039,"took":"11.701828ms","hash":3710382387,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T00:27:56.697806Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710382387,"revision":2039,"compact-revision":1501}
	{"level":"warn","ts":"2024-04-29T00:28:01.015001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.747946ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321686993 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:28:01.015171Z","caller":"traceutil/trace.go:171","msg":"trace[1066403778] linearizableReadLoop","detail":"{readStateIndex:2839; appliedIndex:2838; }","duration":"166.002504ms","start":"2024-04-29T00:28:00.849156Z","end":"2024-04-29T00:28:01.015158Z","steps":["trace[1066403778] 'read index received'  (duration: 64.927058ms)","trace[1066403778] 'applied index is now lower than readState.Index'  (duration: 101.074346ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:01.015567Z","caller":"traceutil/trace.go:171","msg":"trace[1444860087] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"309.347954ms","start":"2024-04-29T00:28:00.706202Z","end":"2024-04-29T00:28:01.01555Z","steps":["trace[1444860087] 'process raft request'  (duration: 207.946307ms)","trace[1444860087] 'compare'  (duration: 100.659345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:01.015577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-46antb\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:01.015843Z","caller":"traceutil/trace.go:171","msg":"trace[1574012410] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-46antb; range_end:; response_count:0; response_revision:2582; }","duration":"166.706906ms","start":"2024-04-29T00:28:00.849128Z","end":"2024-04-29T00:28:01.015834Z","steps":["trace[1574012410] 'agreement among raft nodes before linearized reading'  (duration: 166.065204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:01.015715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:00.706185Z","time spent":"309.436654ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"info","ts":"2024-04-29T00:28:01.13236Z","caller":"traceutil/trace.go:171","msg":"trace[848518735] transaction","detail":"{read_only:false; response_revision:2583; number_of_response:1; }","duration":"106.51056ms","start":"2024-04-29T00:28:01.02575Z","end":"2024-04-29T00:28:01.132261Z","steps":["trace[848518735] 'process raft request'  (duration: 100.002844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:10.719709Z","caller":"traceutil/trace.go:171","msg":"trace[688876790] transaction","detail":"{read_only:false; response_revision:2633; number_of_response:1; }","duration":"131.602022ms","start":"2024-04-29T00:28:10.588085Z","end":"2024-04-29T00:28:10.719687Z","steps":["trace[688876790] 'process raft request'  (duration: 131.335422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321687140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:253b8f272df6da63>","response":"size:41"}
	{"level":"info","ts":"2024-04-29T00:28:11.057309Z","caller":"traceutil/trace.go:171","msg":"trace[730869850] linearizableReadLoop","detail":"{readStateIndex:2894; appliedIndex:2893; }","duration":"310.80146ms","start":"2024-04-29T00:28:10.746493Z","end":"2024-04-29T00:28:11.057294Z","steps":["trace[730869850] 'read index received'  (duration: 118.63939ms)","trace[730869850] 'applied index is now lower than readState.Index'  (duration: 192.16047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.057392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.91436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:11.057434Z","caller":"traceutil/trace.go:171","msg":"trace[965932074] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2633; }","duration":"310.98056ms","start":"2024-04-29T00:28:10.746443Z","end":"2024-04-29T00:28:11.057424Z","steps":["trace[965932074] 'agreement among raft nodes before linearized reading'  (duration: 310.91126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.746429Z","time spent":"311.02236ms","remote":"127.0.0.1:52498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T00:28:11.057874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.721431Z","time spent":"336.441923ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T00:28:11.410781Z","caller":"traceutil/trace.go:171","msg":"trace[921900368] linearizableReadLoop","detail":"{readStateIndex:2895; appliedIndex:2894; }","duration":"284.369895ms","start":"2024-04-29T00:28:11.126395Z","end":"2024-04-29T00:28:11.410765Z","steps":["trace[921900368] 'read index received'  (duration: 193.861274ms)","trace[921900368] 'applied index is now lower than readState.Index'  (duration: 90.507421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.411124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.711696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-267500-m03\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-04-29T00:28:11.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1500780481] range","detail":"{range_begin:/registry/minions/ha-267500-m03; range_end:; response_count:1; response_revision:2634; }","duration":"284.831096ms","start":"2024-04-29T00:28:11.126391Z","end":"2024-04-29T00:28:11.411222Z","steps":["trace[1500780481] 'agreement among raft nodes before linearized reading'  (duration: 284.474795ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:11.411809Z","caller":"traceutil/trace.go:171","msg":"trace[1062724437] transaction","detail":"{read_only:false; response_revision:2634; number_of_response:1; }","duration":"351.77576ms","start":"2024-04-29T00:28:11.059046Z","end":"2024-04-29T00:28:11.410821Z","steps":["trace[1062724437] 'process raft request'  (duration: 261.137839ms)","trace[1062724437] 'compare'  (duration: 90.397121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.412239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:11.059032Z","time spent":"352.927263ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2582 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911331 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"warn","ts":"2024-04-29T00:28:16.429655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.224744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:16.429988Z","caller":"traceutil/trace.go:171","msg":"trace[1266991256] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2651; }","duration":"181.540745ms","start":"2024-04-29T00:28:16.248407Z","end":"2024-04-29T00:28:16.429948Z","steps":["trace[1266991256] 'range keys from in-memory index tree'  (duration: 181.210444ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:31:58.46072Z","caller":"traceutil/trace.go:171","msg":"trace[1217074224] transaction","detail":"{read_only:false; response_revision:3091; number_of_response:1; }","duration":"104.985672ms","start":"2024-04-29T00:31:58.355715Z","end":"2024-04-29T00:31:58.460701Z","steps":["trace[1217074224] 'process raft request'  (duration: 67.769676ms)","trace[1217074224] 'compare'  (duration: 36.776895ms)"],"step_count":2}
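
The warn-level "apply request took too long" entries are etcd's slow-request guard: any apply that exceeds the 100ms expected-duration budget is logged with its actual "took" value, and on this 2-vCPU guest several climb past 300ms; that is latency, not failure. A self-contained sketch of pulling those two fields out of such a line (field names exactly as in the entries above; the sample is abbreviated from one of them):

	package main

	import (
		"encoding/json"
		"fmt"
		"time"
	)

	// Only the fields needed for the slow-apply check.
	type etcdEntry struct {
		Level    string `json:"level"`
		Msg      string `json:"msg"`
		Took     string `json:"took"`
		Expected string `json:"expected-duration"`
	}

	func main() {
		line := `{"level":"warn","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms"}`
		var e etcdEntry
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		took, _ := time.ParseDuration(e.Took)
		limit, _ := time.ParseDuration(e.Expected)
		fmt.Printf("slow=%v (took %v, limit %v)\n", took > limit, took, limit)
	}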
	
	
	==> kernel <==
	 00:32:46 up 26 min,  0 users,  load average: 0.43, 0.40, 0.36
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:31:36.657540       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:31:46.672636       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:31:46.672682       1 main.go:227] handling current node
	I0429 00:31:46.672693       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:31:46.672699       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:31:56.689270       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:31:56.689432       1 main.go:227] handling current node
	I0429 00:31:56.689509       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:31:56.689722       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:06.699696       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:06.699799       1 main.go:227] handling current node
	I0429 00:32:06.699843       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:06.699855       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:16.715578       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:16.715697       1 main.go:227] handling current node
	I0429 00:32:16.715712       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:16.715720       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:26.723571       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:26.723670       1 main.go:227] handling current node
	I0429 00:32:26.723685       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:26.723693       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:36.731258       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:36.731314       1 main.go:227] handling current node
	I0429 00:32:36.731327       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:36.731335       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	I0429 00:27:56.312329       1 trace.go:236] Trace[1132022318]: "Update" accept:application/json, */*,audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:27:55.724) (total time: 587ms):
	Trace[1132022318]: ["GuaranteedUpdate etcd3" audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 587ms (00:27:55.725)
	Trace[1132022318]:  ---"Txn call completed" 586ms (00:27:56.312)]
	Trace[1132022318]: [587.55203ms] [587.55203ms] END
	I0429 00:28:11.413223       1 trace.go:236] Trace[768089845]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.226.61,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 00:28:10.678) (total time: 734ms):
	Trace[768089845]: ---"Transaction prepared" 338ms (00:28:11.058)
	Trace[768089845]: ---"Txn call completed" 354ms (00:28:11.413)
	Trace[768089845]: [734.530496ms] [734.530496ms] END
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	I0429 00:28:02.170352       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-267500-m03\" does not exist"
	I0429 00:28:02.230498       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500-m03" podCIDRs=["10.244.1.0/24"]
	I0429 00:28:05.393266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-267500-m03"
	I0429 00:28:19.456843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-267500-m03"
	I0429 00:28:19.485470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0429 00:28:19.487549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.6µs"
	I0429 00:28:19.505362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.4µs"
	I0429 00:28:22.722440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.931424ms"
	I0429 00:28:22.722950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.301µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
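
The "Waiting for caches to sync" / "Caches are synced" pairs are the standard client-go shared-informer startup handshake; kube-proxy holds off programming rules until its service, endpoint-slice, and node caches are warm. A minimal sketch of the same pattern (placeholder kubeconfig path):

	package main

	import (
		"fmt"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		stop := make(chan struct{})
		defer close(stop)

		factory := informers.NewSharedInformerFactory(cs, 0)
		svc := factory.Core().V1().Services().Informer()
		factory.Start(stop)

		// Block until the initial list lands in the local cache: the same
		// handshake kube-proxy logs above.
		if !cache.WaitForCacheSync(stop, svc.HasSynced) {
			panic("cache never synced")
		}
		fmt.Println("caches are synced")
	}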
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 00:28:02 ha-267500 kubelet[2223]: E0429 00:28:02.772180    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:28:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:28:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:29:02 ha-267500 kubelet[2223]: E0429 00:29:02.767197    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:30:02 ha-267500 kubelet[2223]: E0429 00:30:02.771457    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:31:02 ha-267500 kubelet[2223]: E0429 00:31:02.770173    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:31:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:32:02 ha-267500 kubelet[2223]: E0429 00:32:02.768148    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:32:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
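
The repeating kubelet error above is its iptables canary probe: once a minute it recreates a KUBE-KUBELET-CANARY chain in the nat table for both IPv4 and IPv6 so it can detect rule flushes, and this guest kernel ships no ip6tables nat table, so the IPv6 half fails harmlessly every cycle. Roughly the same probe, as a hedged Go sketch (Linux only, needs root; the -TEST chain name is made up here to avoid touching kubelet's own chain):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, bin := range []string{"iptables", "ip6tables"} {
			// Create and immediately delete a throwaway chain in the nat
			// table; a failure here reproduces the kubelet error above.
			out, err := exec.Command(bin, "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY-TEST").CombinedOutput()
			if err != nil {
				fmt.Printf("%s: nat table unusable: %v\n%s", bin, err, out)
				continue
			}
			exec.Command(bin, "-w", "-t", "nat", "-X", "KUBE-KUBELET-CANARY-TEST").Run()
			fmt.Printf("%s: nat table usable\n", bin)
		}
	}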
	

-- /stdout --
** stderr ** 
	W0428 17:32:38.541319    3084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.5426937s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s:

-- stdout --
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  5m27s (x5 over 20m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  4m30s (x2 over 4m40s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
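
The FailedScheduling events above explain the stuck third replica: the busybox deployment spreads its pods with required pod anti-affinity, each Ready node already hosts one replica, and preemption cannot help, so busybox-fc5497c4f-wg44s stays Pending until a third node joins. A hedged sketch of the kind of anti-affinity term that produces exactly these events (the test's actual manifest may differ):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		aff := corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				// Required, not preferred: the scheduler refuses to co-locate
				// two app=busybox pods on one hostname, which yields the
				// "didn't match pod anti-affinity rules" messages above.
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		fmt.Printf("%+v\n", aff)
	}
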
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (94.55s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (43.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.0952373s)
ha_test.go:413: expected profile "ha-267500" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-267500\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-267500\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-267500\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.27.239.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.27.226.61\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.27.238.86\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.27.233.131\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
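
The assertion at ha_test.go:413 reduces to decoding that JSON and reading .valid[].Status: with one control-plane node stopped the test expects "Degraded", while minikube reported "Stopped". A self-contained sketch of the check, using the field names visible in the payload above (abbreviated sample):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		raw := `{"invalid":[],"valid":[{"Name":"ha-267500","Status":"Stopped"}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-267500" && p.Status != "Degraded" {
				fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
			}
		}
	}
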
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.6686269s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (8.0848957s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-267500 -v=7                | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:25 PDT | 28 Apr 24 17:28 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-267500 node stop m02 -v=7         | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:31 PDT | 28 Apr 24 17:32 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
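The audit trail above repeats a single pattern: exec into each busybox pod and resolve kubernetes.default (in-cluster DNS) and host.minikube.internal (host DNS). A stand-alone sketch of that check, assuming a busybox deployment labeled app=busybox (the label is hypothetical; the table shows minikube's generated pod names):

    # Resolve in-cluster and host DNS from the first busybox pod (sketch).
    POD=$(kubectl get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
    kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"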
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
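Before creating the VM, libmachine probes two group memberships: Hyper-V Administrators (SID S-1-5-32-578, False here) and the built-in Administrators role (True), which is why VM creation proceeds. The same probe can be run by hand; this is the exact one-liner from the log:

    powershell.exe -NoProfile -NonInteractive \
      '([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")'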
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
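"Waiting for host to start..." is a poll loop: check the VM state, then read the first adapter's first IP address, and retry until an address appears (five rounds, roughly 26 seconds, in this run). A minimal sketch of the same loop, with the VM name as logged and a hypothetical retry cap:

    # Poll until the VM's first adapter reports an address (the cap of 60 tries is an assumption).
    for i in $(seq 1 60); do
      ip=$(powershell.exe -NoProfile -NonInteractive \
        '(( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]' | tr -d '\r')
      [ -n "$ip" ] && echo "$ip" && break
      sleep 1
    done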
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
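The server certificate is signed by minikube's local CA and carries the SANs listed above so TLS validates against the VM's IP, the hostname, and localhost alike. A rough stand-in that produces a certificate with the same SANs (self-signed, not the CA-signed flow minikube actually uses; requires OpenSSL 1.1.1+ for -addext):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem -subj "/O=jenkins.ha-267500" \
      -addext "subjectAltName=IP:127.0.0.1,IP:172.27.226.61,DNS:ha-267500,DNS:localhost,DNS:minikube"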
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
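The empty ExecStart= followed by a populated one is the standard systemd override idiom the unit's own comments describe: clear the inherited command, then set the new one. The same idiom in a drop-in file (a sketch; the drop-in path and the trimmed dockerd flags are illustrative):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker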
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
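The diff || { mv; restart; } one-liner above installs the rendered unit only when it differs from what is already on disk, so an unchanged config never restarts Docker; here diff failed because no unit existed yet, which is why the move-and-restart branch ran and the symlink was created. The same logic, expanded:

    # Install the new unit and restart Docker only if it actually changed.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi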
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
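	The exchange above is the guest-clock fix: minikube reads the guest's epoch time over SSH with "date +%s.%N", compares it to the host clock (a 4.59s drift here), and rewrites the guest clock with "sudo date -s @<epoch>". A minimal sketch of that flow, with a hypothetical runSSH helper standing in for minikube's ssh_runner:

	package clocksync

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// runSSH is a hypothetical stand-in for minikube's ssh_runner: it runs
	// cmd on the guest over SSH and returns its stdout.
	func runSSH(cmd string) (string, error) { return "", nil }

	// syncGuestClock mirrors the fix.go steps in the log: read the guest
	// clock, compute the host/guest delta, and reset the guest clock from
	// the host's epoch seconds when the drift exceeds threshold.
	func syncGuestClock(threshold time.Duration) error {
		out, err := runSSH("date +%s.%N")
		if err != nil {
			return err
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		if delta < threshold {
			return nil // close enough; leave the guest clock alone
		}
		// Equivalent of the "sudo date -s @1714349229" command in the log.
		_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
		return err
	}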
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
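	The run of sed edits above rewrites /etc/containerd/config.toml in place: SystemdCgroup is forced to false so containerd, Docker, and the kubelet all agree on the cgroupfs driver, sandbox_image is pinned to registry.k8s.io/pause:3.9, and conf_dir is pointed at /etc/cni/net.d before containerd is restarted. A rough Go equivalent of the SystemdCgroup edit (a sketch, not minikube's code):

	package containerdcfg

	import (
		"os"
		"regexp"
	)

	// forceCgroupfs rewrites config.toml the way the sed command above
	// does: any "SystemdCgroup = ..." line becomes "SystemdCgroup = false",
	// so containerd and the kubelet agree on the cgroupfs driver.
	func forceCgroupfs(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		return os.WriteFile(path, out, 0644)
	}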
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
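	getIPForInterface above walks the host's adapters, skips any whose name does not start with "vEthernet (Default Switch)", and takes the matching adapter's IPv4 address (172.27.224.1) as the host.minikube.internal endpoint. A standard-library sketch of that selection (illustrative, not minikube's ip.go):

	package hostip

	import (
		"fmt"
		"net"
		"strings"
	)

	// ipForInterface returns the first IPv4 address of the first interface
	// whose name has the given prefix, mirroring getIPForInterface above.
	func ipForInterface(prefix string) (net.IP, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, iface := range ifaces {
			if !strings.HasPrefix(iface.Name, prefix) {
				continue // e.g. "Ethernet 2" does not match the prefix
			}
			addrs, err := iface.Addrs()
			if err != nil {
				return nil, err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP, nil // e.g. 172.27.224.1/20 -> 172.27.224.1
				}
			}
		}
		return nil, fmt.Errorf("no interface matching %q", prefix)
	}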
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
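	This is the preload fast path: kube-apiserver:v1.30.0 was not in the Docker image store, so rather than pulling the eight images individually, minikube copies one lz4-compressed tarball (~360 MB) into the guest, unpacks it over /var, and restarts dockerd so the layers and repositories.json are picked up. A sketch of the extract step, run locally here for brevity (minikube runs it on the guest through ssh_runner):

	package preload

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// extractPreload mirrors the log's preload step: unpack the
	// lz4-compressed image tarball over /var so dockerd picks the layers up
	// on restart, and report a duration metric the way ssh_runner.go does
	// for slow commands.
	func extractPreload(tarball string) error {
		start := time.Now()
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, out)
		}
		fmt.Printf("Completed: tar -xf %s: (%s)\n", tarball, time.Since(start))
		return nil
	}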
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
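	The generated file is four YAML documents in one: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (cluster-wide apiserver/etcd settings, including the control-plane.minikube.internal:8443 endpoint), KubeletConfiguration (note evictionHard at "0%", which disables disk-pressure eviction for the test VM), and KubeProxyConfiguration. kubeadm reads all four from /var/tmp/minikube/kubeadm.yaml. A toy sketch, not minikube code, that lists the kinds in such a multi-document file:

	package kubeadmcfg

	import (
		"regexp"
		"strings"
	)

	// kinds returns the YAML document kinds in a multi-document kubeadm
	// config, e.g. [InitConfiguration ClusterConfiguration
	// KubeletConfiguration KubeProxyConfiguration] for the file above.
	func kinds(multiDoc string) []string {
		kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
		var out []string
		for _, doc := range strings.Split(multiDoc, "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				out = append(out, m[1])
			}
		}
		return out
	}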
	
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
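	This static pod is what makes the cluster HA: kube-vip answers ARP for the virtual IP 172.27.239.254 and, with lb_enable set, load-balances apiserver traffic on 8443. Only the current holder of the plndr-cp-lock Lease serves the VIP, and the 5s/3s/1s lease settings bound failover time when a control-plane node stops. A small illustration of how those env vars map to leader-election parameters (type and names here are illustrative, not kube-vip's API):

	package vipnotes

	import "time"

	// leaseParams mirrors the kube-vip env vars above: one control-plane
	// node at a time holds the named Lease and answers ARP for the VIP.
	type leaseParams struct {
		LeaseName     string
		LeaseDuration time.Duration // vip_leaseduration: "5"
		RenewDeadline time.Duration // vip_renewdeadline: "3"
		RetryPeriod   time.Duration // vip_retryperiod:   "1"
	}

	var kubeVIP = leaseParams{
		LeaseName:     "plndr-cp-lock",
		LeaseDuration: 5 * time.Second,
		RenewDeadline: 3 * time.Second,
		RetryPeriod:   1 * time.Second,
	}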
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
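	Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent shell trick: filter out any existing line for the name, append the fresh "IP<tab>name" mapping, and copy the temp file back over /etc/hosts. A Go sketch of the same idea:

	package hostsfile

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostEntry reproduces the log's /etc/hosts trick: drop any existing
	// line for the name, append the new mapping, and write the result back,
	// so repeated runs stay idempotent.
	func setHostEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}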
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
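	The openssl x509 -hash calls print each certificate's 8-hex-digit subject hash (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the test certs), and the <hash>.0 symlinks in /etc/ssl/certs are how OpenSSL-based clients locate trusted CAs. A sketch of installing one cert into that scheme:

	package catrust

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash installs cert into OpenSSL's hashed CA lookup
	// scheme, /etc/ssl/certs/<subject-hash>.0 -> cert, as the log does with
	// "openssl x509 -hash" followed by "ln -fs".
	func linkBySubjectHash(cert string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(cert, link)
	}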
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
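	The --discovery-token-ca-cert-hash printed with the join commands is not a hash of the token: it is "sha256:" over the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA, which lets joining nodes pin the CA before trusting the bootstrap token. A sketch that recomputes it from ca.crt:

	package joinhash

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash recomputes the --discovery-token-ca-cert-hash value
	// printed above: "sha256:" + SHA-256 of the cluster CA's DER-encoded
	// public key.
	func caCertHash(caPath string) (string, error) {
		pemBytes, err := os.ReadFile(caPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", caPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(spki)
		return "sha256:" + hex.EncodeToString(sum[:]), nil
	}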
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
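The burst of `kubectl get sa default` calls above is a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controllers are running, so minikube retries roughly every 500ms until it exists before granting kube-system privileges. A minimal equivalent, assuming kubectl on PATH and a kubeconfig for the new cluster:

    while (-not (kubectl get sa default 2>$null)) { Start-Sleep -Milliseconds 500 }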
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
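The sed pipeline above splices a `hosts` block into the CoreDNS Corefile so pods resolve host.minikube.internal to the Hyper-V host (172.27.224.1). To confirm the edit landed (a sketch, assuming a kubeconfig for this cluster):

    kubectl -n kube-system get configmap coredns -o jsonpath="{.data.Corefile}"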
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
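Both addons can be confirmed from the host afterwards (a sketch; `addons list` reads the profile's saved config):

    minikube -p ha-267500 addons list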
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
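No external switch was found, so minikube settles on Hyper-V's built-in "Default Switch" (an internal, NAT-backed switch; the GUID filtered for above is its fixed, well-known ID). The same enumeration without the JSON wrapping:

    Hyper-V\Get-VMSwitch | Select-Object Id, Name, SwitchType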
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
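The three-step disk sequence above is how the driver seeds the VM with its SSH key: create a tiny fixed VHD, write a tar archive containing the key into its data region (the "magic tar header" / "SSH key tar header" lines), then convert to a dynamic VHD and grow it to the requested 20000MB; the guest presumably extracts the tar from the raw disk on first boot. Condensed (paths as in this run):

    $m = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02'
    Hyper-V\New-VHD     -Path "$m\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...minikube writes the generated SSH key into fixed.vhd as a tar archive here...
    Hyper-V\Convert-VHD -Path "$m\fixed.vhd" -DestinationPath "$m\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD  -Path "$m\disk.vhd" -SizeBytes 20000MB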
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
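"Waiting for host to start..." is a poll: the driver alternates between reading the VM state and asking for the first IPv4 address, which stays empty (the blank stdout lines above) until the guest's network is up, here 172.27.238.86 after roughly half a minute. A sketch of the same loop:

    do {
      Start-Sleep -Seconds 1
      $ip = (Hyper-V\Get-VM ha-267500-m02).NetworkAdapters[0].IPAddresses |
            Where-Object { $_ -notmatch ':' } | Select-Object -First 1
    } until ($ip)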
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
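The server certificate is issued with SANs covering every name the docker daemon may be reached by (loopback, the VM IP, the hostname). With OpenSSL 1.1.1+ available on the host, the SAN list can be inspected directly (a sketch):

    openssl x509 -noout -ext subjectAltName -in C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem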
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
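With the unit installed, dockerd listens on tcp://0.0.0.0:2376 behind mutual TLS (the --tlsverify flags in the ExecStart above). From the host, assuming a docker CLI is installed, the endpoint can be exercised with the client certs minikube generated (a sketch; paths and IP from this run):

    docker --tlsverify --host tcp://172.27.238.86:2376 `
      --tlscacert C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem `
      --tlscert   C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem `
      --tlskey    C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem `
      version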
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
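
The clock fix above reads the guest clock with `date +%s.%N`, diffs it against the host's wall clock, and resets the guest when the drift exceeds tolerance (here 4.624341084s). A minimal sketch of that comparison, with a hypothetical helper name and the values from this run hard-coded; this is illustrative only, not minikube's fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the stdout of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) time.Time {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	return time.Unix(sec, nsec)
}

func main() {
	guest := parseGuestClock("1714349431.710726684") // guest stdout from this run
	host := time.Unix(1714349427, 86385600)          // host wall clock at the read
	fmt.Printf("delta=%v\n", guest.Sub(host))        // ~4.624s, beyond tolerance
	// When the drift is too large, the guest is reset to the host's current
	// epoch; the log runs `sudo date -s @1714349431` a few seconds later.
	fmt.Printf("sudo date -s @%d\n", time.Now().Unix())
}
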
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
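
The three warnings above come from checking whether any proxy-exempt entry covers the new node: NO_PROXY carries only the primary control plane's literal address (172.27.226.61), which is neither the new node's IP (172.27.238.86) nor a CIDR block containing it, hence "ip not in block". A rough sketch of such a membership test, with hypothetical names; not minikube's actual proxy.go:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip matches a literal entry in noProxy or
// falls inside one of its CIDR blocks.
func ipInNoProxy(ip, noProxy string) bool {
	addr := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == ip {
			return true // exact literal match
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
			return true // covered by a CIDR block
		}
	}
	return false
}

func main() {
	// NO_PROXY holds only the primary node's address, so the secondary
	// node's IP is "not in block" and the warning fires.
	fmt.Println(ipInNoProxy("172.27.238.86", "172.27.226.61")) // false
}
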
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
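
The sed chain above rewrites /etc/containerd/config.toml to use the "cgroupfs" cgroup driver (SystemdCgroup = false), migrates the legacy io.containerd.runtime.v1.linux and runc.v1 shims to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. A sketch of the central rewrite using Go's regexp instead of sed, equivalent in effect; the sample config text is an assumption for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same effect as the sed above: force the runc shim onto the
	// "cgroupfs" driver by flipping SystemdCgroup to false, keeping
	// the line's original indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
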
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
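
The decisive line in the journal above is dockerd[1016] timing out on containerd's socket: failed to dial "/run/containerd/containerd.sock": context deadline exceeded, after the ~60s startup window, so systemd marks docker.service failed and minikube exits with RUNTIME_ENABLE. A minimal, hypothetical sketch of a dial-with-deadline probe of that socket; illustrative only, not moby's actual startup code:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket retries a unix-socket dial until it connects or the overall
// deadline expires, mirroring the ~60s window visible in the journal.
func waitForSocket(path string, deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("failed to dial %q: context deadline exceeded", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err) // the condition dockerd reports before systemd gives up
	}
}
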
	
	
	==> Docker <==
	Apr 29 00:29:15 ha-267500 dockerd[1316]: 2024/04/29 00:29:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:45 ha-267500 dockerd[1316]: 2024/04/29 00:32:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:45 ha-267500 dockerd[1316]: 2024/04/29 00:32:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         25 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     25 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         25 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         25 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         25 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:33:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25m   kube-proxy       
	  Normal  NodeHasSufficientMemory  25m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 25m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                25m   kubelet          Node ha-267500 status is now: NodeReady
	
	
	Name:               ha-267500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T17_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:33:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:28:32 +0000   Mon, 29 Apr 2024 00:28:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.233.131
	  Hostname:    ha-267500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0562d38ef374b73969ab15fed947e11
	  System UUID:                c94a104a-b670-854e-ac89-f41b3533cc69
	  Boot ID:                    bca10429-bddd-4547-8fb0-c50d93740969
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jxx6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kindnet-mspbr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-proxy-jcph5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x2 over 5m27s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x2 over 5m27s)  kubelet          Node ha-267500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x2 over 5m27s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-267500-m03 event: Registered Node ha-267500-m03 in Controller
	  Normal  NodeReady                5m10s                  kubelet          Node ha-267500-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"warn","ts":"2024-04-29T00:28:01.015001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.747946ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321686993 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:28:01.015171Z","caller":"traceutil/trace.go:171","msg":"trace[1066403778] linearizableReadLoop","detail":"{readStateIndex:2839; appliedIndex:2838; }","duration":"166.002504ms","start":"2024-04-29T00:28:00.849156Z","end":"2024-04-29T00:28:01.015158Z","steps":["trace[1066403778] 'read index received'  (duration: 64.927058ms)","trace[1066403778] 'applied index is now lower than readState.Index'  (duration: 101.074346ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:01.015567Z","caller":"traceutil/trace.go:171","msg":"trace[1444860087] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"309.347954ms","start":"2024-04-29T00:28:00.706202Z","end":"2024-04-29T00:28:01.01555Z","steps":["trace[1444860087] 'process raft request'  (duration: 207.946307ms)","trace[1444860087] 'compare'  (duration: 100.659345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:01.015577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-46antb\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:01.015843Z","caller":"traceutil/trace.go:171","msg":"trace[1574012410] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-46antb; range_end:; response_count:0; response_revision:2582; }","duration":"166.706906ms","start":"2024-04-29T00:28:00.849128Z","end":"2024-04-29T00:28:01.015834Z","steps":["trace[1574012410] 'agreement among raft nodes before linearized reading'  (duration: 166.065204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:01.015715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:00.706185Z","time spent":"309.436654ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"info","ts":"2024-04-29T00:28:01.13236Z","caller":"traceutil/trace.go:171","msg":"trace[848518735] transaction","detail":"{read_only:false; response_revision:2583; number_of_response:1; }","duration":"106.51056ms","start":"2024-04-29T00:28:01.02575Z","end":"2024-04-29T00:28:01.132261Z","steps":["trace[848518735] 'process raft request'  (duration: 100.002844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:10.719709Z","caller":"traceutil/trace.go:171","msg":"trace[688876790] transaction","detail":"{read_only:false; response_revision:2633; number_of_response:1; }","duration":"131.602022ms","start":"2024-04-29T00:28:10.588085Z","end":"2024-04-29T00:28:10.719687Z","steps":["trace[688876790] 'process raft request'  (duration: 131.335422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321687140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:253b8f272df6da63>","response":"size:41"}
	{"level":"info","ts":"2024-04-29T00:28:11.057309Z","caller":"traceutil/trace.go:171","msg":"trace[730869850] linearizableReadLoop","detail":"{readStateIndex:2894; appliedIndex:2893; }","duration":"310.80146ms","start":"2024-04-29T00:28:10.746493Z","end":"2024-04-29T00:28:11.057294Z","steps":["trace[730869850] 'read index received'  (duration: 118.63939ms)","trace[730869850] 'applied index is now lower than readState.Index'  (duration: 192.16047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.057392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.91436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:11.057434Z","caller":"traceutil/trace.go:171","msg":"trace[965932074] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2633; }","duration":"310.98056ms","start":"2024-04-29T00:28:10.746443Z","end":"2024-04-29T00:28:11.057424Z","steps":["trace[965932074] 'agreement among raft nodes before linearized reading'  (duration: 310.91126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.746429Z","time spent":"311.02236ms","remote":"127.0.0.1:52498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T00:28:11.057874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.721431Z","time spent":"336.441923ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T00:28:11.410781Z","caller":"traceutil/trace.go:171","msg":"trace[921900368] linearizableReadLoop","detail":"{readStateIndex:2895; appliedIndex:2894; }","duration":"284.369895ms","start":"2024-04-29T00:28:11.126395Z","end":"2024-04-29T00:28:11.410765Z","steps":["trace[921900368] 'read index received'  (duration: 193.861274ms)","trace[921900368] 'applied index is now lower than readState.Index'  (duration: 90.507421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.411124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.711696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-267500-m03\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-04-29T00:28:11.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1500780481] range","detail":"{range_begin:/registry/minions/ha-267500-m03; range_end:; response_count:1; response_revision:2634; }","duration":"284.831096ms","start":"2024-04-29T00:28:11.126391Z","end":"2024-04-29T00:28:11.411222Z","steps":["trace[1500780481] 'agreement among raft nodes before linearized reading'  (duration: 284.474795ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:11.411809Z","caller":"traceutil/trace.go:171","msg":"trace[1062724437] transaction","detail":"{read_only:false; response_revision:2634; number_of_response:1; }","duration":"351.77576ms","start":"2024-04-29T00:28:11.059046Z","end":"2024-04-29T00:28:11.410821Z","steps":["trace[1062724437] 'process raft request'  (duration: 261.137839ms)","trace[1062724437] 'compare'  (duration: 90.397121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.412239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:11.059032Z","time spent":"352.927263ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2582 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911331 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"warn","ts":"2024-04-29T00:28:16.429655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.224744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:16.429988Z","caller":"traceutil/trace.go:171","msg":"trace[1266991256] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2651; }","duration":"181.540745ms","start":"2024-04-29T00:28:16.248407Z","end":"2024-04-29T00:28:16.429948Z","steps":["trace[1266991256] 'range keys from in-memory index tree'  (duration: 181.210444ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:31:58.46072Z","caller":"traceutil/trace.go:171","msg":"trace[1217074224] transaction","detail":"{read_only:false; response_revision:3091; number_of_response:1; }","duration":"104.985672ms","start":"2024-04-29T00:31:58.355715Z","end":"2024-04-29T00:31:58.460701Z","steps":["trace[1217074224] 'process raft request'  (duration: 67.769676ms)","trace[1217074224] 'compare'  (duration: 36.776895ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:32:56.706988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2573}
	{"level":"info","ts":"2024-04-29T00:32:56.715678Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2573,"took":"8.393022ms","hash":2612196233,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1978368,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-04-29T00:32:56.715794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2612196233,"revision":2573,"compact-revision":2039}
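
The etcd warnings above fire whenever a single apply exceeds the 100ms expected duration printed in each message; the worst trace in this window is just over 350ms, which points at slow Hyper-V disk I/O rather than a quorum problem. A quick way to pull every slow apply and its measured duration out of a JSON log like this one (a sketch, assuming jq is installed and the log has been saved to the hypothetical file etcd.log):

    # print timestamp and observed duration for each slow-apply warning
    grep '"apply request took too long"' etcd.log | jq -r '[.ts, .took] | @tsv'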
	
	
	==> kernel <==
	 00:33:29 up 27 min,  0 users,  load average: 0.44, 0.39, 0.36
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:32:26.723693       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:36.731258       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:36.731314       1 main.go:227] handling current node
	I0429 00:32:36.731327       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:36.731335       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:46.743128       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:46.743173       1 main.go:227] handling current node
	I0429 00:32:46.743187       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:46.743194       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:32:56.755554       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:32:56.755879       1 main.go:227] handling current node
	I0429 00:32:56.756018       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:32:56.756029       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:33:06.771274       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:33:06.771318       1 main.go:227] handling current node
	I0429 00:33:06.771330       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:33:06.771338       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:33:16.777573       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:33:16.777679       1 main.go:227] handling current node
	I0429 00:33:16.777695       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:33:16.777703       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:33:26.788105       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:33:26.788159       1 main.go:227] handling current node
	I0429 00:33:26.788172       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:33:26.788190       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
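
kindnet is in its steady-state loop here: roughly every ten seconds it re-lists both nodes and re-applies the pod-network route for ha-267500-m03's 10.244.1.0/24 CIDR. Whether that route is actually present on the primary node can be checked over SSH (illustrative; run from the same checkout as the other commands in this report):

    out/minikube-windows-amd64.exe -p ha-267500 ssh "ip route show | grep 10.244.1.0"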
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	I0429 00:27:56.312329       1 trace.go:236] Trace[1132022318]: "Update" accept:application/json, */*,audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:27:55.724) (total time: 587ms):
	Trace[1132022318]: ["GuaranteedUpdate etcd3" audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 587ms (00:27:55.725)
	Trace[1132022318]:  ---"Txn call completed" 586ms (00:27:56.312)]
	Trace[1132022318]: [587.55203ms] [587.55203ms] END
	I0429 00:28:11.413223       1 trace.go:236] Trace[768089845]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.226.61,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 00:28:10.678) (total time: 734ms):
	Trace[768089845]: ---"Transaction prepared" 338ms (00:28:11.058)
	Trace[768089845]: ---"Txn call completed" 354ms (00:28:11.413)
	Trace[768089845]: [734.530496ms] [734.530496ms] END
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	I0429 00:28:02.170352       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-267500-m03\" does not exist"
	I0429 00:28:02.230498       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500-m03" podCIDRs=["10.244.1.0/24"]
	I0429 00:28:05.393266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-267500-m03"
	I0429 00:28:19.456843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-267500-m03"
	I0429 00:28:19.485470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0429 00:28:19.487549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.6µs"
	I0429 00:28:19.505362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.4µs"
	I0429 00:28:22.722440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.931424ms"
	I0429 00:28:22.722950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.301µs"
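
The 00:28 lines show the node-ipam controller assigning pod CIDR 10.244.1.0/24 to the newly joined ha-267500-m03, the same CIDR kindnet reports for that node above; the preceding "Failed to update statusUpdateNeeded" message is the usual transient race while the node object is still being created. The assignment can be confirmed directly on the node object (illustrative):

    kubectl --context ha-267500 get node ha-267500-m03 -o jsonpath="{.spec.podCIDRs}"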
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
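
kube-proxy found no IPv6 iptables support in the guest and fell back to single-stack IPv4 iptables mode, which is consistent with the ip6tables failures in the kubelet section below. The active mode can be confirmed from inside the VM via kube-proxy's metrics endpoint (a sketch; assumes the default metrics port 10249):

    out/minikube-windows-amd64.exe -p ha-267500 ssh "curl -s http://localhost:10249/proxyMode"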
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
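
The burst of "forbidden" list/watch failures at 00:07:59 is a startup race: the scheduler's informers came up before the apiserver had finished reconciling its bootstrap RBAC policy, and the errors stop once the cache sync at 00:08:01 lands. If such errors persisted past startup, the scheduler's effective permissions could be probed with impersonation (illustrative):

    kubectl --context ha-267500 auth can-i list pods --as=system:kube-scheduler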
	
	
	==> kubelet <==
	Apr 29 00:29:02 ha-267500 kubelet[2223]: E0429 00:29:02.767197    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:29:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:29:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:30:02 ha-267500 kubelet[2223]: E0429 00:30:02.771457    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:30:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:30:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:31:02 ha-267500 kubelet[2223]: E0429 00:31:02.770173    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:31:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:31:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:32:02 ha-267500 kubelet[2223]: E0429 00:32:02.768148    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:32:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:33:02 ha-267500 kubelet[2223]: E0429 00:33:02.780392    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:33:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:33:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:33:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:33:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
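
The once-a-minute canary failure is the kubelet probing the IPv6 nat table on an IPv4-only guest whose 5.10 buildroot kernel has no ip6table_nat support; it is noisy but harmless for this cluster. Whether the module exists in the image at all can be checked from the VM (a sketch; modprobe simply fails if the module was never built):

    out/minikube-windows-amd64.exe -p ha-267500 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"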
	

-- /stdout --
** stderr ** 
	W0428 17:33:22.146884   14132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
E0428 17:33:41.005637    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.5182592s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s:

-- stdout --
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  6m10s (x5 over 21m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s (x3 over 5m23s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (43.65s)
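
The describe output explains the stuck replica: the scheduling events indicate that every available node already runs a busybox pod and the deployment's pod anti-affinity forbids a second copy per node, so adding m03 only moved the event from "0/1 nodes are available" to "0/2". The rule itself can be read straight off the pod spec (illustrative):

    kubectl --context ha-267500 get pod busybox-fc5497c4f-wg44s -o jsonpath="{.spec.affinity.podAntiAffinity}"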

TestMultiControlPlane/serial/RestartSecondaryNode (160.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 node start m02 -v=7 --alsologtostderr: exit status 1 (1m17.6200295s)

-- stdout --
	* Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	* Restarting existing hyperv VM for "ha-267500-m02" ...

-- /stdout --
** stderr ** 
	W0428 17:33:43.043524    7660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 17:33:43.050489    7660 out.go:291] Setting OutFile to fd 1608 ...
	I0428 17:33:43.068242    7660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:33:43.068242    7660 out.go:304] Setting ErrFile to fd 1640...
	I0428 17:33:43.068242    7660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:33:43.086280    7660 mustload.go:65] Loading cluster: ha-267500
	I0428 17:33:43.087094    7660 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:33:43.087528    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:33:45.087725    7660 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 17:33:45.087725    7660 main.go:141] libmachine: [stderr =====>] : 
	W0428 17:33:45.087725    7660 host.go:58] "ha-267500-m02" host status: Stopped
	I0428 17:33:45.090375    7660 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:33:45.092590    7660 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:33:45.092590    7660 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:33:45.092590    7660 cache.go:56] Caching tarball of preloaded images
	I0428 17:33:45.093176    7660 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:33:45.093176    7660 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:33:45.093867    7660 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:33:45.096983    7660 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:33:45.097064    7660 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
	I0428 17:33:45.097064    7660 start.go:96] Skipping create...Using existing machine configuration
	I0428 17:33:45.097064    7660 fix.go:54] fixHost starting: m02
	I0428 17:33:45.097733    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:33:47.142157    7660 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 17:33:47.142157    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:33:47.142157    7660 fix.go:112] recreateIfNeeded on ha-267500-m02: state=Stopped err=<nil>
	W0428 17:33:47.142157    7660 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 17:33:47.145519    7660 out.go:177] * Restarting existing hyperv VM for "ha-267500-m02" ...
	I0428 17:33:47.148198    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:33:50.124578    7660 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:33:50.124578    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:33:50.124578    7660 main.go:141] libmachine: Waiting for host to start...
	I0428 17:33:50.125143    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:33:52.274900    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:33:52.274900    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:33:52.274900    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:33:54.701185    7660 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:33:54.701457    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:33:55.711170    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:33:57.830389    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:33:57.831304    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:33:57.831304    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:00.271426    7660 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:34:00.271426    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:01.284042    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:03.378161    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:03.378161    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:03.378265    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:05.825478    7660 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:34:05.825478    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:06.829583    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:08.894490    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:08.894490    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:08.894490    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:11.322316    7660 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:34:11.322316    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:12.325884    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:14.388505    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:14.388728    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:14.388931    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:16.897657    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:16.897738    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:16.901542    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:18.958154    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:18.958154    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:18.958154    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:21.504938    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:21.505162    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:21.505410    7660 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:34:21.507956    7660 machine.go:94] provisionDockerMachine start ...
	I0428 17:34:21.508044    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:23.622327    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:23.622327    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:23.622327    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:26.068220    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:26.068591    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:26.075922    7660 main.go:141] libmachine: Using SSH client type: native
	I0428 17:34:26.076676    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
	I0428 17:34:26.076676    7660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:34:26.216879    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:34:26.216879    7660 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:34:26.216879    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:28.212487    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:28.212487    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:28.212685    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:30.641559    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:30.641973    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:30.649405    7660 main.go:141] libmachine: Using SSH client type: native
	I0428 17:34:30.650084    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
	I0428 17:34:30.650084    7660 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:34:30.820037    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
	
	I0428 17:34:30.820037    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:32.875426    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:32.875482    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:32.875482    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:35.326829    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:35.327744    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:35.334031    7660 main.go:141] libmachine: Using SSH client type: native
	I0428 17:34:35.334175    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
	I0428 17:34:35.334175    7660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:34:35.493567    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:34:35.493665    7660 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:34:35.493742    7660 buildroot.go:174] setting up certificates
	I0428 17:34:35.493883    7660 provision.go:84] configureAuth start
	I0428 17:34:35.494033    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:37.505061    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:37.505128    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:37.505128    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:40.054198    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:40.054198    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:40.054198    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:42.098775    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:42.098894    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:42.098894    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:44.564505    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:44.565547    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:44.565547    7660 provision.go:143] copyHostCerts
	I0428 17:34:44.565643    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:34:44.565643    7660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:34:44.565643    7660 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:34:44.566400    7660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:34:44.567113    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:34:44.567838    7660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:34:44.567907    7660 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:34:44.567907    7660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:34:44.569238    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:34:44.569238    7660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:34:44.569238    7660 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:34:44.569779    7660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:34:44.570746    7660 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.226.135 ha-267500-m02 localhost minikube]
	I0428 17:34:44.657798    7660 provision.go:177] copyRemoteCerts
	I0428 17:34:44.668756    7660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:34:44.668756    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:46.669677    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:46.669734    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:46.669734    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:49.130430    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:49.130685    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:49.130924    7660 sshutil.go:53] new ssh client: &{IP:172.27.226.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:34:49.234515    7660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5657516s)
	I0428 17:34:49.234515    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:34:49.235193    7660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:34:49.293671    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:34:49.293671    7660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:34:49.340859    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:34:49.341348    7660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:34:49.391610    7660 provision.go:87] duration metric: took 13.8976681s to configureAuth
	I0428 17:34:49.391690    7660 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:34:49.391914    7660 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:34:49.392486    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:51.408246    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:51.408246    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:51.409270    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:53.856818    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:53.857720    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:53.863199    7660 main.go:141] libmachine: Using SSH client type: native
	I0428 17:34:53.863929    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
	I0428 17:34:53.863929    7660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:34:54.001596    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:34:54.001722    7660 buildroot.go:70] root file system type: tmpfs
	I0428 17:34:54.002042    7660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:34:54.002176    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:34:56.053599    7660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:34:56.053886    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:56.053886    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:34:58.508478    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135
	
	I0428 17:34:58.508615    7660 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:34:58.513441    7660 main.go:141] libmachine: Using SSH client type: native
	I0428 17:34:58.513866    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
	I0428 17:34:58.513950    7660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:34:58.684520    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:34:58.684670    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state

** /stderr **
ha_test.go:422: W0428 17:33:43.043524    7660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0428 17:33:43.050489    7660 out.go:291] Setting OutFile to fd 1608 ...
I0428 17:33:43.068242    7660 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 17:33:43.068242    7660 out.go:304] Setting ErrFile to fd 1640...
I0428 17:33:43.068242    7660 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 17:33:43.086280    7660 mustload.go:65] Loading cluster: ha-267500
I0428 17:33:43.087094    7660 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 17:33:43.087528    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:33:45.087725    7660 main.go:141] libmachine: [stdout =====>] : Off

I0428 17:33:45.087725    7660 main.go:141] libmachine: [stderr =====>] : 
W0428 17:33:45.087725    7660 host.go:58] "ha-267500-m02" host status: Stopped
I0428 17:33:45.090375    7660 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
I0428 17:33:45.092590    7660 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0428 17:33:45.092590    7660 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0428 17:33:45.092590    7660 cache.go:56] Caching tarball of preloaded images
I0428 17:33:45.093176    7660 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0428 17:33:45.093176    7660 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0428 17:33:45.093867    7660 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
I0428 17:33:45.096983    7660 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0428 17:33:45.097064    7660 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
I0428 17:33:45.097064    7660 start.go:96] Skipping create...Using existing machine configuration
I0428 17:33:45.097064    7660 fix.go:54] fixHost starting: m02
I0428 17:33:45.097733    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:33:47.142157    7660 main.go:141] libmachine: [stdout =====>] : Off

I0428 17:33:47.142157    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:33:47.142157    7660 fix.go:112] recreateIfNeeded on ha-267500-m02: state=Stopped err=<nil>
W0428 17:33:47.142157    7660 fix.go:138] unexpected machine state, will restart: <nil>
I0428 17:33:47.145519    7660 out.go:177] * Restarting existing hyperv VM for "ha-267500-m02" ...
I0428 17:33:47.148198    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
I0428 17:33:50.124578    7660 main.go:141] libmachine: [stdout =====>] : 
I0428 17:33:50.124578    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:33:50.124578    7660 main.go:141] libmachine: Waiting for host to start...
I0428 17:33:50.125143    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:33:52.274900    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:33:52.274900    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:33:52.274900    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:33:54.701185    7660 main.go:141] libmachine: [stdout =====>] : 
I0428 17:33:54.701457    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:33:55.711170    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:33:57.830389    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:33:57.831304    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:33:57.831304    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:00.271426    7660 main.go:141] libmachine: [stdout =====>] : 
I0428 17:34:00.271426    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:01.284042    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:03.378161    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:03.378161    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:03.378265    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:05.825478    7660 main.go:141] libmachine: [stdout =====>] : 
I0428 17:34:05.825478    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:06.829583    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:08.894490    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:08.894490    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:08.894490    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:11.322316    7660 main.go:141] libmachine: [stdout =====>] : 
I0428 17:34:11.322316    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:12.325884    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:14.388505    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:14.388728    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:14.388931    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:16.897657    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:16.897738    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:16.901542    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:18.958154    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:18.958154    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:18.958154    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:21.504938    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:21.505162    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:21.505410    7660 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
I0428 17:34:21.507956    7660 machine.go:94] provisionDockerMachine start ...
I0428 17:34:21.508044    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:23.622327    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:23.622327    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:23.622327    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:26.068220    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:26.068591    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:26.075922    7660 main.go:141] libmachine: Using SSH client type: native
I0428 17:34:26.076676    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
I0428 17:34:26.076676    7660 main.go:141] libmachine: About to run SSH command:
hostname
I0428 17:34:26.216879    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0428 17:34:26.216879    7660 buildroot.go:166] provisioning hostname "ha-267500-m02"
I0428 17:34:26.216879    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:28.212487    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:28.212487    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:28.212685    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:30.641559    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:30.641973    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:30.649405    7660 main.go:141] libmachine: Using SSH client type: native
I0428 17:34:30.650084    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
I0428 17:34:30.650084    7660 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
I0428 17:34:30.820037    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02

I0428 17:34:30.820037    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:32.875426    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:32.875482    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:32.875482    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:35.326829    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:35.327744    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:35.334031    7660 main.go:141] libmachine: Using SSH client type: native
I0428 17:34:35.334175    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
I0428 17:34:35.334175    7660 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0428 17:34:35.493567    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0428 17:34:35.493665    7660 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0428 17:34:35.493742    7660 buildroot.go:174] setting up certificates
I0428 17:34:35.493883    7660 provision.go:84] configureAuth start
I0428 17:34:35.494033    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:37.505061    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:37.505128    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:37.505128    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:40.054198    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:40.054198    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:40.054198    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:42.098775    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:42.098894    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:42.098894    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:44.564505    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:44.565547    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:44.565547    7660 provision.go:143] copyHostCerts
I0428 17:34:44.565643    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
I0428 17:34:44.565643    7660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0428 17:34:44.565643    7660 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0428 17:34:44.566400    7660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0428 17:34:44.567113    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
I0428 17:34:44.567838    7660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0428 17:34:44.567907    7660 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0428 17:34:44.567907    7660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
I0428 17:34:44.569238    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
I0428 17:34:44.569238    7660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0428 17:34:44.569238    7660 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0428 17:34:44.569779    7660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0428 17:34:44.570746    7660 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.226.135 ha-267500-m02 localhost minikube]
I0428 17:34:44.657798    7660 provision.go:177] copyRemoteCerts
I0428 17:34:44.668756    7660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0428 17:34:44.668756    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:46.669677    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:46.669734    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:46.669734    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:49.130430    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:49.130685    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:49.130924    7660 sshutil.go:53] new ssh client: &{IP:172.27.226.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
I0428 17:34:49.234515    7660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5657516s)
I0428 17:34:49.234515    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0428 17:34:49.235193    7660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0428 17:34:49.293671    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0428 17:34:49.293671    7660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0428 17:34:49.340859    7660 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0428 17:34:49.341348    7660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0428 17:34:49.391610    7660 provision.go:87] duration metric: took 13.8976681s to configureAuth
I0428 17:34:49.391690    7660 buildroot.go:189] setting minikube options for container-runtime
I0428 17:34:49.391914    7660 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0428 17:34:49.392486    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:51.408246    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:51.408246    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:51.409270    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:53.856818    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:53.857720    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:53.863199    7660 main.go:141] libmachine: Using SSH client type: native
I0428 17:34:53.863929    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
I0428 17:34:53.863929    7660 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0428 17:34:54.001596    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0428 17:34:54.001722    7660 buildroot.go:70] root file system type: tmpfs
I0428 17:34:54.002042    7660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0428 17:34:54.002176    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
I0428 17:34:56.053599    7660 main.go:141] libmachine: [stdout =====>] : Running

I0428 17:34:56.053886    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:56.053886    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
I0428 17:34:58.508478    7660 main.go:141] libmachine: [stdout =====>] : 172.27.226.135

I0428 17:34:58.508615    7660 main.go:141] libmachine: [stderr =====>] : 
I0428 17:34:58.513441    7660 main.go:141] libmachine: Using SSH client type: native
I0428 17:34:58.513866    7660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.135 22 <nil> <nil>}
I0428 17:34:58.513950    7660 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0428 17:34:58.684520    7660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0428 17:34:58.684670    7660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-267500 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (255.7µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (37µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (256.6µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (102.8µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (0s)
E0428 17:35:36.428143    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-267500 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-267500 -n ha-267500: (11.711439s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-267500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-267500 logs -n 25: (7.8915859s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:22 PDT | 28 Apr 24 17:22 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT | 28 Apr 24 17:23 PDT |
	|         | busybox-fc5497c4f-5xln2 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:23 PDT |                     |
	|         | busybox-fc5497c4f-wg44s -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- get pods -o          | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT | 28 Apr 24 17:24 PDT |
	|         | busybox-fc5497c4f-5xln2              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-5xln2 -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-jxx6x              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-267500 -- exec                 | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:24 PDT |                     |
	|         | busybox-fc5497c4f-wg44s              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-267500 -v=7                | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:25 PDT | 28 Apr 24 17:28 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-267500 node stop m02 -v=7         | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:31 PDT | 28 Apr 24 17:32 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-267500 node start m02 -v=7        | ha-267500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 17:33 PDT |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 17:05:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 17:05:00.635889   15128 out.go:291] Setting OutFile to fd 1448 ...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.636883   15128 out.go:304] Setting ErrFile to fd 980...
	I0428 17:05:00.636883   15128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 17:05:00.660527   15128 out.go:298] Setting JSON to false
	I0428 17:05:00.664060   15128 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6543,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 17:05:00.664060   15128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 17:05:00.669160   15128 out.go:177] * [ha-267500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 17:05:00.673143   15128 notify.go:220] Checking for updates...
	I0428 17:05:00.675298   15128 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:05:00.677914   15128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 17:05:00.680526   15128 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 17:05:00.682871   15128 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 17:05:00.686326   15128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 17:05:00.689521   15128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 17:05:05.728109   15128 out.go:177] * Using the hyperv driver based on user configuration
	I0428 17:05:05.733726   15128 start.go:297] selected driver: hyperv
	I0428 17:05:05.733726   15128 start.go:901] validating driver "hyperv" against <nil>
	I0428 17:05:05.733888   15128 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 17:05:05.779166   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 17:05:05.780739   15128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 17:05:05.780739   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:05:05.780739   15128 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 17:05:05.780739   15128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 17:05:05.780739   15128 start.go:340] cluster config:
	{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:05:05.781443   15128 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 17:05:05.786272   15128 out.go:177] * Starting "ha-267500" primary control-plane node in "ha-267500" cluster
	I0428 17:05:05.789365   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:05:05.790343   15128 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 17:05:05.790343   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:05:05.790810   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:05:05.791000   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:05:05.791210   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:05:05.791210   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json: {Name:mk9d04dce876aeea74569e2a12d8158542a180a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:360] acquireMachinesLock for ha-267500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:05:05.792798   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500"
	I0428 17:05:05.793473   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:05:05.793473   15128 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 17:05:05.798458   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:05:05.798458   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:05:05.799075   15128 client.go:168] LocalClient.Create starting
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799227   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:05:05.799932   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:05:07.765342   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:05:07.765366   15128 main.go:141] libmachine: [stderr =====>] : 
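The first preflight above confirms the Hyper-V PowerShell module is installed at all; an empty result would abort VM creation before any resources are touched. A minimal standalone reproduction of the same check (a sketch, not minikube's exact code path):

    # Fail fast when the Hyper-V role/module is missing on the host
    $hv = @(Get-Module -ListAvailable Hyper-V).Name | Get-Unique
    if (-not $hv) { throw 'Hyper-V PowerShell module not found' }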
	I0428 17:05:07.765483   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:05:09.466609   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:09.466685   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:10.942750   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [stderr =====>] : 
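The two IsInRole probes above decide whether the current user may manage Hyper-V: S-1-5-32-578 is the well-known SID of the BUILTIN\Hyper-V Administrators group (False for this CI user), and the fallback is plain membership in the Administrator role (True here). A combined sketch of that gate, assuming either role suffices:

    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    # S-1-5-32-578 = BUILTIN\Hyper-V Administrators
    $hvAdmin = $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))
    $admin   = $principal.IsInRole([Security.Principal.WindowsBuiltInRole]'Administrator')
    if (-not ($hvAdmin -or $admin)) { throw 'insufficient rights to manage Hyper-V VMs' }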
	I0428 17:05:10.942832   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:14.306457   15128 main.go:141] libmachine: [stderr =====>] : 
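The switch query keeps only External switches plus the built-in "Default Switch", matched by its fixed GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444; on this host only the internal Default Switch (SwitchType 1) exists, and it is the one chosen once VM creation starts. The equivalent query, reconstructed from the log:

    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or
                       ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType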
	I0428 17:05:14.309202   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:05:14.797607   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: Creating VM...
	I0428 17:05:14.890374   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:05:17.596457   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:05:17.596534   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:17.596629   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:05:17.596740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:05:19.370841   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:19.370912   15128 main.go:141] libmachine: Creating VHD
	I0428 17:05:19.370912   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:05:22.987163   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6323F08D-1941-41F6-AECD-59FDB38477C4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:05:22.987787   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:22.987787   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:05:22.987950   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:05:22.997062   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:05:26.067081   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:26.067395   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:26.067482   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd' -SizeBytes 20000MB
	I0428 17:05:28.607147   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [stderr =====>] : 
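The three VHD commands above implement the driver's key-injection trick: a tiny 10 MB *fixed* VHD is created so that a raw tar stream (the "magic tar header" plus the SSH key, written between the commands) lands directly in the disk's data area, the file is then converted to a dynamic VHD, and finally resized to the requested 20000 MB; the boot2docker guest unpacks the tar on first boot. The sequence, reconstructed from the log:

    $m = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500'
    Hyper-V\New-VHD -Path "$m\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...the driver writes the SSH-key tar stream into fixed.vhd at this point...
    Hyper-V\Convert-VHD -Path "$m\fixed.vhd" -DestinationPath "$m\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$m\disk.vhd" -SizeBytes 20000MB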
	I0428 17:05:28.607695   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-267500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:05:32.186256   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:32.186340   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
	I0428 17:05:34.304828   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:34.304890   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500 -Count 2
	I0428 17:05:36.364288   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:36.365155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:36.365244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\boot2docker.iso'
	I0428 17:05:38.788294   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:38.789017   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\disk.vhd'
	I0428 17:05:41.250474   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:41.251513   15128 main.go:141] libmachine: Starting VM...
	I0428 17:05:41.251660   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [stderr =====>] : 
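VM assembly follows: a VM bound to the Default Switch, dynamic memory disabled so the guest keeps its full 2200 MB, two vCPUs, the boot2docker ISO in the DVD drive, the prepared disk attached, then Start-VM. Collected from the log into one script (machine path as above):

    $m = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500'
    Hyper-V\New-VM ha-267500 -Path $m -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-267500 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-267500 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-267500 -Path "$m\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-267500 -Path "$m\disk.vhd"
    Hyper-V\Start-VM ha-267500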
	I0428 17:05:44.257162   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:05:44.257162   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:46.422511   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:48.796976   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:48.797051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:49.812421   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:51.911514   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:51.912240   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:51.912333   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:54.389553   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:54.389603   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:55.396985   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:05:57.531696   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:05:57.532241   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:05:59.865311   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:05:59.865354   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:00.867371   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:02.918643   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:06:05.299379   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:06.311485   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:08.432715   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:10.915736   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:10.916779   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:10.916848   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [stderr =====>] : 
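The "Waiting for host to start" phase polls two things in a loop: the VM state and the first IP address of the first network adapter, which stays empty until the guest's DHCP lease arrives (about 25 s in this run, yielding 172.27.226.61). A sketch of the same wait loop:

    do {
        Start-Sleep -Seconds 1
        $state = ( Hyper-V\Get-VM ha-267500 ).State
        $ip    = (( Hyper-V\Get-VM ha-267500 ).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    $ip   # 172.27.226.61 in this run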
	I0428 17:06:12.945722   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:06:12.945722   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:14.977125   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:14.977649   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:17.397233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:17.403860   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:17.413822   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:17.413822   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:06:17.548827   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:06:17.549001   15128 buildroot.go:166] provisioning hostname "ha-267500"
	I0428 17:06:17.549001   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:19.531965   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:21.963707   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:21.963891   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:21.969614   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:21.970234   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:21.970287   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname
	I0428 17:06:22.125673   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500
	
	I0428 17:06:22.125673   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:24.116092   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:24.116148   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:26.498042   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:26.498298   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:26.504621   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:26.505426   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:26.505426   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:06:26.654593   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
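With SSH reachable, provisioning renames the guest from its ISO default ("minikube") to the profile name: set the kernel hostname, persist it to /etc/hostname, and pin it in /etc/hosts as shown above. minikube drives this through its own SSH client; to replay a step by hand, Windows' bundled OpenSSH client and the generated machine key work the same way (a sketch, with paths taken from this log):

    $key = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa'
    ssh -i $key docker@172.27.226.61 'sudo hostname ha-267500 && echo "ha-267500" | sudo tee /etc/hostname'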
	I0428 17:06:26.654745   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:06:26.654745   15128 buildroot.go:174] setting up certificates
	I0428 17:06:26.654878   15128 provision.go:84] configureAuth start
	I0428 17:06:26.654974   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:28.642768   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:28.643033   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:31.047002   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:31.047712   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:33.032385   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:33.033114   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:33.033244   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:35.470487   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:35.470551   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:35.470602   15128 provision.go:143] copyHostCerts
	I0428 17:06:35.470602   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:06:35.470602   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:06:35.470602   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:06:35.471409   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:06:35.472302   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:06:35.472302   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:06:35.472302   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:06:35.474368   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:06:35.475508   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:06:35.475508   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:06:35.477084   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500 san=[127.0.0.1 172.27.226.61 ha-267500 localhost minikube]
	I0428 17:06:35.561808   15128 provision.go:177] copyRemoteCerts
	I0428 17:06:35.577487   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:06:35.577487   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:37.563943   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:37.564802   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:40.009310   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:40.009619   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:06:40.122812   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5453174s)
	I0428 17:06:40.122812   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:06:40.124516   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:06:40.170921   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:06:40.171551   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 17:06:40.219603   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:06:40.219603   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:06:40.266084   15128 provision.go:87] duration metric: took 13.6111193s to configureAuth
	I0428 17:06:40.266084   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:06:40.266857   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:06:40.267021   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:42.241538   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:42.241914   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:44.632119   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:44.637923   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:44.637923   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:44.637923   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:06:44.774113   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:06:44.774113   15128 buildroot.go:70] root file system type: tmpfs
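The provisioner records the root filesystem type before generating the Docker unit; on the boot2docker guest, / is a tmpfs, as the probe above shows. The same probe by hand (key variable as above):

    ssh -i $key docker@172.27.226.61 'df --output=fstype / | tail -n 1'   # prints "tmpfs"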
	I0428 17:06:44.774113   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:06:44.774650   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:46.777708   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:46.778317   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:46.778401   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:49.181965   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:49.187437   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:49.187970   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:49.188102   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:06:49.338418   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:06:49.339201   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:51.331459   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:51.331634   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:06:53.755706   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:53.762358   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:06:53.763024   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:06:53.763024   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:06:55.964469   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 17:06:55.964469   15128 machine.go:97] duration metric: took 43.0186778s to provisionDockerMachine
	I0428 17:06:55.964469   15128 client.go:171] duration metric: took 1m50.1652174s to LocalClient.Create
	I0428 17:06:55.964469   15128 start.go:167] duration metric: took 1m50.1658343s to libmachine.API.Create "ha-267500"
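The unit install above is deliberately idempotent: the rendered file is written to docker.service.new, and only when diff reports a difference (or, as on this first boot, the old file does not yet exist and diff itself fails) is it moved into place, followed by daemon-reload, enable, and a forced restart. Replayed as a single remote command:

    $key = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa'
    ssh -i $key docker@172.27.226.61 'sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }'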
	I0428 17:06:55.965115   15128 start.go:293] postStartSetup for "ha-267500" (driver="hyperv")
	I0428 17:06:55.965216   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:06:55.979546   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:06:55.979546   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:06:57.968316   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:06:57.969137   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:06:57.969264   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:00.415449   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:00.415502   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:00.415502   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:00.529139   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5495858s)
	I0428 17:07:00.542143   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:07:00.550032   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:07:00.550213   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:07:00.550570   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:07:00.551284   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:07:00.551284   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:07:00.565509   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:07:00.584743   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:07:00.629457   15128 start.go:296] duration metric: took 4.6642336s for postStartSetup
	I0428 17:07:00.635014   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:02.626728   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:02.627487   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:02.627874   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:05.092989   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:05.093104   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:05.093386   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:07:05.096398   15128 start.go:128] duration metric: took 1m59.3027333s to createHost
	I0428 17:07:05.096398   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:07.065139   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:07.066155   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:07.066393   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:09.551453   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:09.552365   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:09.558305   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:09.559011   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:09.559011   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:07:09.695211   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349229.688972111
	
	I0428 17:07:09.695211   15128 fix.go:216] guest clock: 1714349229.688972111
	I0428 17:07:09.695293   15128 fix.go:229] Guest: 2024-04-28 17:07:09.688972111 -0700 PDT Remote: 2024-04-28 17:07:05.096398 -0700 PDT m=+124.563135001 (delta=4.592574111s)
	I0428 17:07:09.695407   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:11.789797   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:11.789847   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:11.789990   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:14.233375   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:14.240619   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:07:14.240815   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.226.61 22 <nil> <nil>}
	I0428 17:07:14.240815   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349229
	I0428 17:07:14.381527   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:07:09 UTC 2024
	
	I0428 17:07:14.381591   15128 fix.go:236] clock set: Mon Apr 29 00:07:09 UTC 2024
	 (err=<nil>)
	I0428 17:07:14.381591   15128 start.go:83] releasing machines lock for "ha-267500", held for 2m8.5881066s
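Clock sync closes out host creation: the guest clock is read with date +%s.%N, compared against the host (a 4.59 s drift here, accumulated while the VM booted), and reset with date -s @<epoch>. A by-hand equivalent that uses the host's current epoch time:

    $key = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa'
    ssh -i $key docker@172.27.226.61 'date +%s.%N'                      # read the guest clock
    ssh -i $key docker@172.27.226.61 "sudo date -s @$([DateTimeOffset]::Now.ToUnixTimeSeconds())"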
	I0428 17:07:14.381888   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:16.379116   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:16.379233   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:18.836854   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:18.842518   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:07:18.842698   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:18.852567   15128 ssh_runner.go:195] Run: cat /version.json
	I0428 17:07:18.853571   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.910892   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.911012   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:20.912913   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:07:20.913115   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:20.913211   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:07:23.515321   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.515423   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.515870   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:07:23.545848   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:07:23.545848   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: cat /version.json: (4.8814384s)
	I0428 17:07:23.734013   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8914872s)
	I0428 17:07:23.747746   15128 ssh_runner.go:195] Run: systemctl --version
	I0428 17:07:23.771255   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 17:07:23.781524   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:07:23.793701   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:07:23.822613   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:07:23.822613   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:23.822613   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:23.866813   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:07:23.903238   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:07:23.922743   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:07:23.934150   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:07:23.963653   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:23.994818   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:07:24.027248   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:07:24.060207   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:07:24.094263   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:07:24.140407   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:07:24.173847   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:07:24.204942   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:07:24.241686   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:07:24.271540   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:24.469049   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:07:24.498779   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:07:24.511314   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:07:24.547731   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.585442   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:07:24.632453   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:07:24.665555   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.704256   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:07:24.766295   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:07:24.792824   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:07:24.839067   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:07:24.857950   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:07:24.877113   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:07:24.928235   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:07:25.145493   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:07:25.342459   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:07:25.342632   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 17:07:25.392872   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:25.606530   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:28.159251   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517925s)
	I0428 17:07:28.171034   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 17:07:28.211210   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.251460   15128 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 17:07:28.457673   15128 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 17:07:28.655447   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:28.858401   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 17:07:28.905418   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 17:07:28.943568   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:29.150079   15128 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 17:07:29.264527   15128 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 17:07:29.277774   15128 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 17:07:29.287734   15128 start.go:562] Will wait 60s for crictl version
	I0428 17:07:29.298726   15128 ssh_runner.go:195] Run: which crictl
	I0428 17:07:29.316760   15128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 17:07:29.366950   15128 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 17:07:29.376977   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.418646   15128 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 17:07:29.453698   15128 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 17:07:29.453698   15128 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 17:07:29.458039   15128 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 17:07:29.460811   15128 ip.go:210] interface addr: 172.27.224.1/20
	I0428 17:07:29.473489   15128 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 17:07:29.479885   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
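host.minikube.internal gives the guest a stable name for the host: the driver picks the host-side address of the "vEthernet (Default Switch)" interface (172.27.224.1/20 here) and rewrites /etc/hosts inside the guest. A simplified equivalent of that rewrite (the logged command additionally strips any stale entry before appending):

    $key = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa'
    ssh -i $key docker@172.27.226.61 "grep -q 'host.minikube.internal' /etc/hosts || echo '172.27.224.1 host.minikube.internal' | sudo tee -a /etc/hosts"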
	I0428 17:07:29.514603   15128 kubeadm.go:877] updating cluster {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 17:07:29.514603   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:07:29.523620   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:29.550369   15128 docker.go:685] Got preloaded images: 
	I0428 17:07:29.550483   15128 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 17:07:29.562702   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:29.593952   15128 ssh_runner.go:195] Run: which lz4
	I0428 17:07:29.600117   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 17:07:29.613555   15128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 17:07:29.619890   15128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 17:07:29.619890   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 17:07:31.519069   15128 docker.go:649] duration metric: took 1.9189486s to copy over tarball
	I0428 17:07:31.533069   15128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 17:07:40.472773   15128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9396898s)
	I0428 17:07:40.472925   15128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 17:07:40.541351   15128 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 17:07:40.567273   15128 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 17:07:40.619221   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:40.837523   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:07:44.196770   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3592418s)
	I0428 17:07:44.207767   15128 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 17:07:44.237423   15128 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 17:07:44.237484   15128 cache_images.go:84] Images are preloaded, skipping loading
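The sequence above is the preload fast path: list the images the container runtime already has, and only when the pinned kube-apiserver tag is missing copy the lz4 tarball over and unpack it into /var, then restart Docker and re-check. A minimal local sketch of that decision in Go, using os/exec where minikube drives the same commands through its ssh_runner (the paths, tar flags, and image tag are taken from the log; everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func preloadIfMissing(wantImage, tarball string) error {
	// Same probe as the log: list repo:tag pairs known to Docker.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return fmt.Errorf("listing images: %w", err)
	}
	if strings.Contains(string(out), wantImage) {
		return nil // "Images are preloaded, skipping loading"
	}
	// Extract the preload tarball the way the log does: xattrs preserved,
	// decompressed through lz4, rooted at /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting preload: %w", err)
	}
	return exec.Command("sudo", "systemctl", "restart", "docker").Run()
}

func main() {
	if err := preloadIfMissing("registry.k8s.io/kube-apiserver:v1.30.0", "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
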
	I0428 17:07:44.237484   15128 kubeadm.go:928] updating node { 172.27.226.61 8443 v1.30.0 docker true true} ...
	I0428 17:07:44.237484   15128 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-267500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.226.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
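The [Service] block above is rendered from the node values echoed in the config line that follows it. A sketch of producing the same drop-in with text/template; the template text mirrors the unit in the log, while the struct and its fields are illustrative rather than minikube's own types:

package main

import (
	"os"
	"text/template"
)

// unit reproduces the kubelet drop-in from the log, with the three
// node-specific values turned into template fields.
const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.0", "ha-267500", "172.27.226.61"})
}
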
	I0428 17:07:44.246763   15128 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 17:07:44.282127   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:07:44.282216   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:07:44.282216   15128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 17:07:44.282351   15128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.226.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-267500 NodeName:ha-267500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.226.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.226.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 17:07:44.282455   15128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.226.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-267500"
	  kubeletExtraArgs:
	    node-ip: 172.27.226.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.226.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
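The config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that later lands at /var/tmp/minikube/kubeadm.yaml. A quick sanity-check sketch that splits the stream on document separators and prints each kind; the path comes from the log, the check itself is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm config documents are separated by a bare "---" line.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
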
	I0428 17:07:44.282455   15128 kube-vip.go:111] generating kube-vip config ...
	I0428 17:07:44.297487   15128 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 17:07:44.321501   15128 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 17:07:44.322489   15128 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
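Most of the manifest above is a fixed skeleton; the values that vary are the VIP (172.27.239.254, the APIServerHAVIP from the cluster config), the port, the interface, and the lb_enable/lb_port pair that the "auto-enabling control-plane load-balancing" line adds. A sketch of assembling that env block; the helper and its signature are illustrative, not minikube's kube-vip.go:

package main

import "fmt"

type envVar struct{ Name, Value string }

// kubeVIPEnv builds the subset of kube-vip env vars that carry
// node-specific values in the manifest above.
func kubeVIPEnv(vip, iface string, port int, enableLB bool) []envVar {
	env := []envVar{
		{"vip_arp", "true"},
		{"port", fmt.Sprint(port)},
		{"vip_interface", iface},
		{"cp_enable", "true"},
		{"vip_leaderelection", "true"},
		{"address", vip},
	}
	if enableLB { // mirrors "auto-enabling control-plane load-balancing"
		env = append(env, envVar{"lb_enable", "true"}, envVar{"lb_port", fmt.Sprint(port)})
	}
	return env
}

func main() {
	for _, e := range kubeVIPEnv("172.27.239.254", "eth0", 8443, true) {
		fmt.Printf("- name: %s\n  value: %q\n", e.Name, e.Value)
	}
}
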
	I0428 17:07:44.337281   15128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 17:07:44.356448   15128 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 17:07:44.368828   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 17:07:44.388733   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 17:07:44.419285   15128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 17:07:44.454529   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0428 17:07:44.492910   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 17:07:44.535119   15128 ssh_runner.go:195] Run: grep 172.27.239.254	control-plane.minikube.internal$ /etc/hosts
	I0428 17:07:44.544353   15128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
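The bash one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current VIP, and copy the rewritten file back via sudo. The same transform in Go, sketched under the assumption that writing a sibling .new file stands in for the /tmp/h.$$ plus sudo cp step:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops every existing line ending in "\t<host>" and appends a
// fresh "ip\thost" entry, exactly what the grep -v / echo pipeline does.
func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(keep, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "172.27.239.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
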
	I0428 17:07:44.584071   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:07:44.784658   15128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 17:07:44.813138   15128 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500 for IP: 172.27.226.61
	I0428 17:07:44.813138   15128 certs.go:194] generating shared ca certs ...
	I0428 17:07:44.813138   15128 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:44.814022   15128 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 17:07:44.814402   15128 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 17:07:44.814630   15128 certs.go:256] generating profile certs ...
	I0428 17:07:44.815376   15128 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key
	I0428 17:07:44.815452   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt with IP's: []
	I0428 17:07:45.207682   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt ...
	I0428 17:07:45.207682   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.crt: {Name:mkad69168dad75f83e0efa34e0b67056be851f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.209661   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key ...
	I0428 17:07:45.209661   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\client.key: {Name:mkb880ba41d02f89477ac0bc036a3238bb214c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.210642   15128 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3
	I0428 17:07:45.211691   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]
	I0428 17:07:45.272240   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 ...
	I0428 17:07:45.272240   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3: {Name:mk99fb8942eac42f7e59971118a5e983aa693542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.273362   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 ...
	I0428 17:07:45.273362   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3: {Name:mkdcebf54b68db40ea28398d3bc9d7030e2380c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.274711   15128 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt
	I0428 17:07:45.286842   15128 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key.613c3df3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key
	I0428 17:07:45.287930   15128 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key
	I0428 17:07:45.288916   15128 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt with IP's: []
	I0428 17:07:45.392345   15128 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt ...
	I0428 17:07:45.392345   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt: {Name:mk043c6e778c0a46cac3b2815bc508f265aae077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:07:45.394630   15128 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key ...
	I0428 17:07:45.394630   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key: {Name:mk9cbeba2bc7745cd3561dc98b61ab1be7e0e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
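"Generating cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.226.61 172.27.239.254]" means the apiserver serving certificate carries those addresses as IP SANs, covering the service VIP, loopback, the node IP, and the HA VIP. A compact crypto/x509 sketch of such a certificate; it self-signs for brevity, whereas minikube signs with the minikubeCA key it found earlier:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative CN
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the IP SANs listed in the log
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.27.226.61"), net.ParseIP("172.27.239.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
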
	I0428 17:07:45.395971   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 17:07:45.396297   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 17:07:45.396701   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 17:07:45.396840   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 17:07:45.396982   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 17:07:45.397123   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 17:07:45.404414   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 17:07:45.405312   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 17:07:45.405975   15128 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 17:07:45.406015   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 17:07:45.406268   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 17:07:45.406623   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 17:07:45.406886   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 17:07:45.407157   15128 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 17:07:45.407157   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 17:07:45.407872   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:45.408049   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 17:07:45.408290   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 17:07:45.465598   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 17:07:45.514624   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 17:07:45.563309   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 17:07:45.610689   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 17:07:45.668205   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 17:07:45.709224   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 17:07:45.760227   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 17:07:45.808948   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 17:07:45.867908   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 17:07:45.915616   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 17:07:45.964791   15128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 17:07:46.023214   15128 ssh_runner.go:195] Run: openssl version
	I0428 17:07:46.048823   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 17:07:46.088573   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.097176   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.109096   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 17:07:46.132635   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 17:07:46.166258   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 17:07:46.204585   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.212881   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.228291   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 17:07:46.251359   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 17:07:46.286250   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 17:07:46.330437   15128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.337213   15128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.348616   15128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 17:07:46.369695   15128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
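Each openssl/ln pair above computes a certificate's subject hash and links the PEM as /etc/ssl/certs/<hash>.0, which is how OpenSSL's hashed-directory lookup finds trust roots (b5213941 is the hash printed for minikubeCA.pem here). A sketch of the same two steps from Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	// openssl x509 -hash -noout prints the subject hash on stdout.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
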
	I0428 17:07:46.404629   15128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 17:07:46.416103   15128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 17:07:46.416103   15128 kubeadm.go:391] StartCluster: {Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 17:07:46.427776   15128 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 17:07:46.462126   15128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 17:07:46.492998   15128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 17:07:46.525017   15128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 17:07:46.543389   15128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 17:07:46.543449   15128 kubeadm.go:156] found existing configuration files:
	
	I0428 17:07:46.559558   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 17:07:46.576906   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 17:07:46.591547   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 17:07:46.622617   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 17:07:46.643274   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 17:07:46.657479   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 17:07:46.687575   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.704724   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 17:07:46.717169   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 17:07:46.749254   15128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 17:07:46.767247   15128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 17:07:46.779268   15128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 17:07:46.798138   15128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 17:07:47.295492   15128 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 17:08:03.206037   15128 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 17:08:03.206217   15128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 17:08:03.206547   15128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 17:08:03.206720   15128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 17:08:03.207017   15128 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 17:08:03.207166   15128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 17:08:03.211078   15128 out.go:204]   - Generating certificates and keys ...
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 17:08:03.211427   15128 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 17:08:03.212047   15128 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 17:08:03.212253   15128 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 17:08:03.212452   15128 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 17:08:03.212808   15128 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-267500 localhost] and IPs [172.27.226.61 127.0.0.1 ::1]
	I0428 17:08:03.213396   15128 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 17:08:03.213747   15128 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 17:08:03.214403   15128 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 17:08:03.214647   15128 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 17:08:03.214647   15128 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 17:08:03.217496   15128 out.go:204]   - Booting up control plane ...
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 17:08:03.217496   15128 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 17:08:03.218523   15128 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 17:08:03.218673   15128 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 17:08:03.218845   15128 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 17:08:03.219109   15128 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002004724s
	I0428 17:08:03.219380   15128 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 17:08:03.219512   15128 kubeadm.go:309] [api-check] The API server is healthy after 9.018382318s
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 17:08:03.219547   15128 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 17:08:03.219547   15128 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 17:08:03.219547   15128 kubeadm.go:309] [mark-control-plane] Marking the node ha-267500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 17:08:03.219547   15128 kubeadm.go:309] [bootstrap-token] Using token: o2t0fz.gqoxv8rhmbtgnafl
	I0428 17:08:03.222077   15128 out.go:204]   - Configuring RBAC rules ...
	I0428 17:08:03.223255   15128 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 17:08:03.223390   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 17:08:03.223700   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 17:08:03.224022   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 17:08:03.224356   15128 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 17:08:03.224673   15128 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 17:08:03.224822   15128 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 17:08:03.224822   15128 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 17:08:03.224822   15128 kubeadm.go:309] 
	I0428 17:08:03.224822   15128 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 17:08:03.225393   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.225532   15128 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 17:08:03.225532   15128 kubeadm.go:309] 
	I0428 17:08:03.226084   15128 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 17:08:03.226084   15128 kubeadm.go:309] 
	I0428 17:08:03.226252   15128 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 17:08:03.226279   15128 kubeadm.go:309] 
	I0428 17:08:03.226368   15128 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 17:08:03.226368   15128 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 17:08:03.226368   15128 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 17:08:03.226368   15128 kubeadm.go:309] 
	I0428 17:08:03.226941   15128 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 17:08:03.227102   15128 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 17:08:03.227102   15128 kubeadm.go:309] 
	I0428 17:08:03.227370   15128 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 17:08:03.227509   15128 kubeadm.go:309] 	--control-plane 
	I0428 17:08:03.227509   15128 kubeadm.go:309] 
	I0428 17:08:03.227814   15128 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 17:08:03.227814   15128 kubeadm.go:309] 
	I0428 17:08:03.228020   15128 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	I0428 17:08:03.228020   15128 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
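The join command printed above carries the two secrets a second node needs: the bootstrap token and the discovery CA hash. A sketch of extracting both from kubeadm's stdout with regular expressions; whether minikube parses this text or mints a fresh token for ha-267500-m02 is not visible in this log, so treat the approach as illustrative:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Literal output copied from the log above.
	out := `kubeadm join control-plane.minikube.internal:8443 --token o2t0fz.gqoxv8rhmbtgnafl \
	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c`
	token := regexp.MustCompile(`--token (\S+)`).FindStringSubmatch(out)
	hash := regexp.MustCompile(`--discovery-token-ca-cert-hash (\S+)`).FindStringSubmatch(out)
	fmt.Println("token:", token[1])
	fmt.Println("ca hash:", hash[1])
}
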
	I0428 17:08:03.228020   15128 cni.go:84] Creating CNI manager for ""
	I0428 17:08:03.228020   15128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 17:08:03.230920   15128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 17:08:03.245586   15128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 17:08:03.254991   15128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 17:08:03.255049   15128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 17:08:03.307618   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 17:08:04.087321   15128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 17:08:04.101185   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-267500 minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-267500 minikube.k8s.io/primary=true
	I0428 17:08:04.110392   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.127454   15128 ops.go:34] apiserver oom_adj: -16
	I0428 17:08:04.338961   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:04.853452   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.339051   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:05.843300   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.345394   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:06.842588   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.347466   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:07.845426   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.343954   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:08.844666   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.346016   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:09.847106   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.346157   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:10.852073   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.350599   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:11.851124   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.339498   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:12.839469   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.341674   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:13.844363   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.340478   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:14.840892   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.351020   15128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 17:08:15.542789   15128 kubeadm.go:1107] duration metric: took 11.4553488s to wait for elevateKubeSystemPrivileges
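The burst of identical "kubectl get sa default" lines above is a poll: minikube retries about every 500ms until the default ServiceAccount exists, because the cluster-admin binding it just created is useless until the token controller has populated the account. A generic sketch of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor polls check at the given interval until it succeeds or the
// timeout elapses, the same shape as the retry run shown above.
func waitFor(check func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(func() error {
		return exec.Command("kubectl", "get", "sa", "default").Run()
	}, 500*time.Millisecond, 2*time.Minute)
	fmt.Println(err)
}
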
	W0428 17:08:15.542884   15128 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 17:08:15.542948   15128 kubeadm.go:393] duration metric: took 29.1267984s to StartCluster
	I0428 17:08:15.542948   15128 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.543147   15128 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:15.545087   15128 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 17:08:15.546714   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 17:08:15.546792   15128 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:15.546862   15128 start.go:240] waiting for startup goroutines ...
	I0428 17:08:15.546921   15128 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 17:08:15.547043   15128 addons.go:69] Setting storage-provisioner=true in profile "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:234] Setting addon storage-provisioner=true in "ha-267500"
	I0428 17:08:15.547043   15128 addons.go:69] Setting default-storageclass=true in profile "ha-267500"
	I0428 17:08:15.547186   15128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-267500"
	I0428 17:08:15.547186   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:15.547418   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.548408   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:15.760123   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 17:08:16.117515   15128 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
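The sed pipeline above splices a hosts block into the CoreDNS Corefile so host.minikube.internal resolves to the host gateway (172.27.224.1), then replaces the ConfigMap. A sketch of just the text transform on a Corefile string; the kubectl round-trip and the extra log directive the sed also injects are omitted:

package main

import (
	"fmt"
	"strings"
)

// injectHost inserts a hosts plugin block immediately before the forward
// directive, matching the sed address /^        forward . \/etc\/resolv.conf.*/i.
func injectHost(corefile, ip string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	return strings.Replace(corefile, "        forward . /etc/resolv.conf", block+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Println(injectHost(corefile, "172.27.224.1"))
}
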
	I0428 17:08:17.727218   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:17.728064   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:17.731020   15128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 17:08:17.728718   15128 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 17:08:17.731866   15128 kapi.go:59] client config for ha-267500: &rest.Config{Host:"https://172.27.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-267500\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 17:08:17.733765   15128 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:17.733849   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 17:08:17.733849   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:17.735131   15128 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 17:08:17.735131   15128 addons.go:234] Setting addon default-storageclass=true in "ha-267500"
	I0428 17:08:17.735756   15128 host.go:66] Checking if "ha-267500" exists ...
	I0428 17:08:17.736495   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.022150   15128 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:20.022150   15128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 17:08:20.022150   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500 ).state
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:20.023713   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:20.024648   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.176019   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:08:22.176993   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.177104   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500 ).networkadapters[0]).ipaddresses[0]
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:22.649286   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:22.649653   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:22.838833   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 17:08:23.942043   15128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1032083s)
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stdout =====>] : 172.27.226.61
	
	I0428 17:08:24.736051   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:24.736869   15128 sshutil.go:53] new ssh client: &{IP:172.27.226.61 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500\id_rsa Username:docker}
	I0428 17:08:24.878922   15128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 17:08:25.036824   15128 round_trippers.go:463] GET https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 17:08:25.036824   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.036824   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.036824   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.047850   15128 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 17:08:25.050270   15128 round_trippers.go:463] PUT https://172.27.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 17:08:25.050270   15128 round_trippers.go:469] Request Headers:
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Accept: application/json, */*
	I0428 17:08:25.050270   15128 round_trippers.go:473]     Content-Type: application/json
	I0428 17:08:25.050270   15128 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 17:08:25.054895   15128 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 17:08:25.058644   15128 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 17:08:25.062323   15128 addons.go:505] duration metric: took 9.5154456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 17:08:25.062323   15128 start.go:245] waiting for cluster config update ...
	I0428 17:08:25.062323   15128 start.go:254] writing updated cluster config ...
	I0428 17:08:25.064855   15128 out.go:177] 
	I0428 17:08:25.074876   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:08:25.074876   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.081680   15128 out.go:177] * Starting "ha-267500-m02" control-plane node in "ha-267500" cluster
	I0428 17:08:25.084831   15128 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 17:08:25.084949   15128 cache.go:56] Caching tarball of preloaded images
	I0428 17:08:25.085245   15128 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 17:08:25.085467   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 17:08:25.085668   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:08:25.089909   15128 start.go:360] acquireMachinesLock for ha-267500-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 17:08:25.089909   15128 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-267500-m02"
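acquireMachinesLock (Delay:500ms Timeout:13m0s in the line above) serializes machine creation across concurrent node starts; here it was uncontended, so acquiring it took 0s. A rough sketch of such a polling file lock, using O_EXCL creation as the mutex; the lock path and scheme are illustrative, not minikube's:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls until it can create the lock file exclusively, then
// returns a release func that removes it.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("lock %s: timeout after %v", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire(os.TempDir()+"/ha-267500-m02.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}
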
	I0428 17:08:25.089909   15128 start.go:93] Provisioning new machine with config: &{Name:ha-267500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-267500 Namespace:default APIServerHAVIP:172.27.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.226.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 17:08:25.089909   15128 start.go:125] createHost starting for "m02" (driver="hyperv")
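	The lock parameters in the config dump above (Delay:500ms, Timeout:13m0s) describe a retry-until-deadline acquisition around machine creation. A sketch of those semantics with a plain O_EXCL lock file; the file-based mechanism is an assumption, minikube uses its own lock helpers:

```go
// Sketch of acquireMachinesLock semantics: retry every delay until timeout.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquireMachinesLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // held; took 0s in the run above
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s after %v", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireMachinesLock("ha-267500-m02.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
}
```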
	I0428 17:08:25.092669   15128 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 17:08:25.092669   15128 start.go:159] libmachine.API.Create for "ha-267500" (driver="hyperv")
	I0428 17:08:25.092669   15128 client.go:168] LocalClient.Create starting
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 17:08:25.093686   15128 main.go:141] libmachine: Decoding PEM data...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: Parsing certificate...
	I0428 17:08:25.094755   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 17:08:26.932082   15128 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 17:08:26.932249   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:26.932469   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 17:08:28.625007   15128 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 17:08:28.625741   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:28.625836   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:30.145128   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:30.145193   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:30.145352   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:33.641047   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:33.641341   15128 main.go:141] libmachine: [stderr =====>] : 
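	Each `[executing ==>]` line above is a fresh powershell.exe invocation whose stdout is captured and parsed. A sketch of the Get-VMSwitch probe, with the struct shaped after the JSON in the log:

```go
// Shell out to PowerShell and decode ConvertTo-Json output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func listSwitches() ([]vmSwitch, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		`[Console]::OutputEncoding = [Text.Encoding]::UTF8; `+
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
	if err != nil {
		return nil, err
	}
	var switches []vmSwitch
	return switches, json.Unmarshal(out, &switches)
}

func main() {
	switches, err := listSwitches()
	fmt.Println(switches, err) // e.g. [{c08cb7b8-... Default Switch 1}] on this host
}
```

	Wrapping the result in `@(...)` keeps a single switch serialized as a one-element JSON array, which is why the log shows `[ { ... } ]` even though only "Default Switch" matched.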
	I0428 17:08:33.643919   15128 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 17:08:34.107074   15128 main.go:141] libmachine: Creating SSH key...
	I0428 17:08:34.283136   15128 main.go:141] libmachine: Creating VM...
	I0428 17:08:34.284168   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 17:08:37.085226   15128 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:37.085497   15128 main.go:141] libmachine: Using switch "Default Switch"
	I0428 17:08:37.085497   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:38.799740   15128 main.go:141] libmachine: Creating VHD
	I0428 17:08:38.799740   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C4811B2-F108-4C17-8C85-240087500FFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 17:08:42.432588   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing magic tar header
	I0428 17:08:42.432588   15128 main.go:141] libmachine: Writing SSH key tar header
	I0428 17:08:42.443176   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 17:08:45.530814   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:45.531090   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd' -SizeBytes 20000MB
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [stderr =====>] : 
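	The "Writing magic tar header / Writing SSH key tar header" steps splice a small tar stream into the front of the fixed 10MB VHD before it is converted to dynamic and resized; on first boot the guest spots the marker, formats the disk, and installs the key. A sketch following the docker-machine convention, where the marker name and file layout should be treated as assumptions:

```go
// Write a tar stream at offset 0 of the raw VHD: a "please format me" marker
// plus the SSH public key for the guest to install.
package main

import (
	"archive/tar"
	"os"
)

func writeMagicTar(vhdPath string, pubKey []byte) error {
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	// Marker file name asking boot2docker to format this disk (assumed layout).
	if err := tw.WriteHeader(&tar.Header{Name: "boot2docker, please format-me", Typeflag: tar.TypeReg, Mode: 0o644}); err != nil {
		return err
	}
	// Public key the guest installs after formatting (assumed path).
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Typeflag: tar.TypeReg, Mode: 0o644, Size: int64(len(pubKey))}); err != nil {
		return err
	}
	_, err = tw.Write(pubKey)
	return err
}

func main() {
	key, _ := os.ReadFile("id_rsa.pub")
	_ = writeMagicTar(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\fixed.vhd`, key)
}
```

	Converting to a dynamic VHD afterwards (Convert-VHD ... -DeleteSource, as above) preserves the written bytes while reclaiming the unused space.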
	I0428 17:08:47.993046   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 17:08:51.507051   15128 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-267500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 17:08:51.507121   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:51.507184   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-267500-m02 -DynamicMemoryEnabled $false
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:53.623610   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:53.623959   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-267500-m02 -Count 2
	I0428 17:08:55.746706   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:55.747282   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:55.747376   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\boot2docker.iso'
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:08:58.230232   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:08:58.231298   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-267500-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\disk.vhd'
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:00.819111   15128 main.go:141] libmachine: Starting VM...
	I0428 17:09:00.819246   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-267500-m02
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [stderr =====>] : 
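	The VM assembly above is a fixed cmdlet sequence with no branching, so a failure at any step aborts the whole create. A sketch that replays it, with paths copied from the log and the same powershell.exe invocation pattern as before:

```go
// Run the Hyper-V cmdlet sequence from the log, stopping on first error.
package main

import (
	"fmt"
	"os/exec"
)

func ps(command string) error {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", command, err, out)
	}
	return nil
}

func main() {
	name := "ha-267500-m02"
	dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02`
	steps := []string{
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
	for _, step := range steps {
		if err := ps(step); err != nil {
			fmt.Println(err)
			return
		}
	}
}
```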
	I0428 17:09:03.833885   15128 main.go:141] libmachine: Waiting for host to start...
	I0428 17:09:03.833885   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:06.036447   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:08.535107   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:08.535676   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:09.540110   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:11.730252   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:11.730767   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:11.730896   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:14.267320   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:14.267920   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:15.278102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:17.429662   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:17.430272   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:19.872667   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:19.873239   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:20.874059   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:23.049283   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:23.049554   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:25.483021   15128 main.go:141] libmachine: [stdout =====>] : 
	I0428 17:09:25.483840   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:26.497330   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:28.593026   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:28.593193   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:31.092267   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:31.092830   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:33.155893   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:33.156190   15128 main.go:141] libmachine: [stderr =====>] : 
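	The "Waiting for host to start..." block is a poll loop: confirm the VM is Running, then read the first adapter's first IP until it is non-empty. Each powershell.exe round trip costs about two seconds on this host, which is why successive probes land roughly five seconds apart. A sketch with an explicit deadline:

```go
// Poll VM state and adapter IP until the guest reports an address.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func psOut(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
	return strings.TrimSpace(string(out)), err
}

func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, _ := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if state == "Running" {
			ip, _ := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if ip != "" {
				return ip, nil // 172.27.238.86 in the run above
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("%s: no IP before deadline", vm)
}

func main() {
	ip, err := waitForIP("ha-267500-m02", 5*time.Minute)
	fmt.Println(ip, err)
}
```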
	I0428 17:09:33.156190   15128 machine.go:94] provisionDockerMachine start ...
	I0428 17:09:33.156343   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:35.235080   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:37.708958   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:37.709094   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:37.715262   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:37.715453   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:37.715453   15128 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 17:09:37.838307   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 17:09:37.838307   15128 buildroot.go:166] provisioning hostname "ha-267500-m02"
	I0428 17:09:37.838307   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:39.845337   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:39.845507   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:39.845582   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:42.372033   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:42.372654   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:42.379934   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:42.380083   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:42.380083   15128 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname
	I0428 17:09:42.534583   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-267500-m02
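	"Using SSH client type: native" means the provisioning commands run over an in-process Go SSH client rather than a spawned ssh binary. A round-trip sketch with golang.org/x/crypto/ssh, using the machine key path from the log (host-key verification is skipped only to keep the sketch short):

```go
// Run one provisioning command over SSH with key auth.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runSSH("172.27.238.86:22",
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa`,
		`sudo hostname ha-267500-m02 && echo "ha-267500-m02" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
```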
	
	I0428 17:09:42.534727   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:44.673497   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:44.674240   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:47.250295   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:47.257595   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:09:47.258189   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:09:47.258189   15128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-267500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-267500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-267500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 17:09:47.404787   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 17:09:47.404787   15128 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 17:09:47.404787   15128 buildroot.go:174] setting up certificates
	I0428 17:09:47.404787   15128 provision.go:84] configureAuth start
	I0428 17:09:47.404787   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:49.416138   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:51.875459   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:51.875853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:53.926853   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:53.927030   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:53.927102   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:09:56.411706   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:09:56.412682   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:56.412682   15128 provision.go:143] copyHostCerts
	I0428 17:09:56.412881   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 17:09:56.413201   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 17:09:56.413201   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 17:09:56.413699   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 17:09:56.414916   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 17:09:56.415172   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 17:09:56.415172   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 17:09:56.417043   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 17:09:56.417043   15128 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 17:09:56.417043   15128 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 17:09:56.417691   15128 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 17:09:56.418448   15128 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-267500-m02 san=[127.0.0.1 172.27.238.86 ha-267500-m02 localhost minikube]
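	The server cert above is signed by the minikube CA with the VM's IP and the hostnames from the san=[...] list. A crypto/x509 sketch; the 26280h lifetime comes from CertExpiration in the config dump, and the in-memory CA built in main is for illustration only:

```go
// Issue a TLS server certificate for the Docker daemon, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-267500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.238.86")},
		DNSNames:     []string{"ha-267500-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(26280 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
```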
	I0428 17:09:56.698158   15128 provision.go:177] copyRemoteCerts
	I0428 17:09:56.713232   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 17:09:56.713232   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:09:58.727438   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:09:58.728437   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:09:58.728572   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:01.200219   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:01.200219   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:01.303703   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5904121s)
	I0428 17:10:01.303703   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 17:10:01.304216   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 17:10:01.351115   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 17:10:01.351613   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 17:10:01.399941   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 17:10:01.400279   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 17:10:01.447643   15128 provision.go:87] duration metric: took 14.0428334s to configureAuth
	I0428 17:10:01.447643   15128 buildroot.go:189] setting minikube options for container-runtime
	I0428 17:10:01.448198   15128 config.go:182] Loaded profile config "ha-267500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 17:10:01.448388   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:03.468941   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:03.470041   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:05.919509   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:05.925618   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:05.926194   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:05.926194   15128 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 17:10:06.056503   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 17:10:06.056605   15128 buildroot.go:70] root file system type: tmpfs
	I0428 17:10:06.056795   15128 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 17:10:06.056855   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:08.084596   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:08.084681   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:10.593844   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:10.594210   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:10.600708   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:10.601470   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:10.601470   15128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.226.61"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 17:10:10.751881   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.226.61
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 17:10:10.751947   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:12.903901   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:12.904363   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:15.479691   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:15.479915   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:15.486849   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:15.487030   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:15.487030   15128 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 17:10:17.663081   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
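	The SSH one-liner above is a replace-only-if-changed install: diff the freshly rendered unit against the live one, and only on a difference mv it into place, then daemon-reload, enable, and restart. An equivalent (assumed) local formulation of the same idiom:

```go
// Install a systemd unit only when its contents changed, then reload/restart.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installUnit(path string, contents []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, contents) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", contents, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated sketch
	fmt.Println(installUnit("/lib/systemd/system/docker.service", unit))
}
```

	In the fresh VM above the diff fails because no unit exists yet, so the install branch runs unconditionally and `systemctl enable` creates the symlink shown.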
	
	I0428 17:10:17.663081   15128 machine.go:97] duration metric: took 44.506824s to provisionDockerMachine
	I0428 17:10:17.663081   15128 client.go:171] duration metric: took 1m52.570239s to LocalClient.Create
	I0428 17:10:17.663081   15128 start.go:167] duration metric: took 1m52.570239s to libmachine.API.Create "ha-267500"
	I0428 17:10:17.663081   15128 start.go:293] postStartSetup for "ha-267500-m02" (driver="hyperv")
	I0428 17:10:17.663081   15128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 17:10:17.677002   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 17:10:17.677002   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:19.758750   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:19.758853   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:22.318985   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:22.318985   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:22.423330   15128 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7463207s)
	I0428 17:10:22.436053   15128 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 17:10:22.443505   15128 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 17:10:22.443505   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 17:10:22.444052   15128 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 17:10:22.445207   15128 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 17:10:22.445207   15128 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 17:10:22.458722   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 17:10:22.477786   15128 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 17:10:22.526087   15128 start.go:296] duration metric: took 4.8629979s for postStartSetup
	I0428 17:10:22.528901   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:24.622004   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:27.083556   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:27.084100   15128 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-267500\config.json ...
	I0428 17:10:27.086385   15128 start.go:128] duration metric: took 2m1.9962875s to createHost
	I0428 17:10:27.086385   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:29.130169   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:29.131174   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:31.572065   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:31.572369   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:31.578077   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:31.578656   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:31.578656   15128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 17:10:31.707789   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349431.710726684
	
	I0428 17:10:31.707789   15128 fix.go:216] guest clock: 1714349431.710726684
	I0428 17:10:31.707789   15128 fix.go:229] Guest: 2024-04-28 17:10:31.710726684 -0700 PDT Remote: 2024-04-28 17:10:27.0863856 -0700 PDT m=+326.552805801 (delta=4.624341084s)
	I0428 17:10:31.707789   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:33.768506   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:36.213446   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:36.218864   15128 main.go:141] libmachine: Using SSH client type: native
	I0428 17:10:36.219399   15128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.238.86 22 <nil> <nil>}
	I0428 17:10:36.219663   15128 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714349431
	I0428 17:10:36.353520   15128 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 00:10:31 UTC 2024
	
	I0428 17:10:36.353602   15128 fix.go:236] clock set: Mon Apr 29 00:10:31 UTC 2024
	 (err=<nil>)
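	The clock fix above reads the guest epoch with `date +%s.%N`, computes the drift against the host (4.62s here), and resets the guest with `sudo date -s @<epoch>`. A sketch of that check; the one-second threshold is an assumption, and the runner stands in for the SSH helper sketched earlier:

```go
// Compare guest and host clocks over SSH and correct meaningful drift.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func syncGuestClock(run func(cmd string) (string, error)) error {
	out, err := run("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	delta := time.Since(time.Unix(int64(secs), 0))
	if delta < 0 {
		delta = -delta
	}
	if delta > time.Second { // assumed threshold; the run above corrected 4.62s
		_, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
	return err
}

func main() {
	// A stub runner keeps the sketch self-contained; wire in a real SSH client.
	err := syncGuestClock(func(string) (string, error) { return "1714349431.710726684", nil })
	fmt.Println(err)
}
```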
	I0428 17:10:36.353602   15128 start.go:83] releasing machines lock for "ha-267500-m02", held for 2m11.26349s
	I0428 17:10:36.353795   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:38.401891   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:38.401930   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:40.883767   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:40.883929   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:40.887007   15128 out.go:177] * Found network options:
	I0428 17:10:40.889514   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.892316   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.894427   15128 out.go:177]   - NO_PROXY=172.27.226.61
	W0428 17:10:40.897007   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 17:10:40.898142   15128 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 17:10:40.900035   15128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 17:10:40.900035   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:40.912127   15128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 17:10:40.913152   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-267500-m02 ).state
	I0428 17:10:43.021173   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:43.021424   15128 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-267500-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.601643   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.602076   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.622078   15128 main.go:141] libmachine: [stdout =====>] : 172.27.238.86
	
	I0428 17:10:45.622258   15128 main.go:141] libmachine: [stderr =====>] : 
	I0428 17:10:45.622506   15128 sshutil.go:53] new ssh client: &{IP:172.27.238.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-267500-m02\id_rsa Username:docker}
	I0428 17:10:45.694842   15128 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7816825s)
	W0428 17:10:45.694980   15128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 17:10:45.707857   15128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 17:10:45.811368   15128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 17:10:45.811368   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:45.811368   15128 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.911325s)
	I0428 17:10:45.811813   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:45.869634   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 17:10:45.905032   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 17:10:45.930324   15128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 17:10:45.946027   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 17:10:45.978279   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.013710   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 17:10:46.061695   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 17:10:46.102008   15128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 17:10:46.135573   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 17:10:46.171642   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 17:10:46.204807   15128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 17:10:46.239021   15128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 17:10:46.271655   15128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 17:10:46.306942   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:46.514038   15128 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 17:10:46.544941   15128 start.go:494] detecting cgroup driver to use...
	I0428 17:10:46.560491   15128 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 17:10:46.605547   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.654104   15128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 17:10:46.708544   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 17:10:46.748048   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.784762   15128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 17:10:46.849187   15128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 17:10:46.873497   15128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 17:10:46.927545   15128 ssh_runner.go:195] Run: which cri-dockerd
	I0428 17:10:46.944545   15128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 17:10:46.962213   15128 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 17:10:47.010730   15128 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 17:10:47.237397   15128 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 17:10:47.429784   15128 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 17:10:47.429870   15128 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
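	The 130-byte daemon.json scp'd above pins Docker's cgroup driver to cgroupfs, presumably so it agrees with the kubelet. A plausible payload, generated locally for illustration; any field beyond the cgroup driver named in the log is an assumption:

```go
// Emit a minimal daemon.json selecting the cgroupfs driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // driver named in the log
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
	_ = os.WriteFile("daemon.json", b, 0o644)
}
```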
	I0428 17:10:47.474822   15128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 17:10:47.662962   15128 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 17:11:48.797471   15128 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1344114s)
	I0428 17:11:48.811984   15128 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 17:11:48.846867   15128 out.go:177] 
	W0428 17:11:48.851004   15128 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 00:10:16 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.119534579Z" level=info msg="Starting up"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.120740894Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 00:10:16 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:16.121661806Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.164120251Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189883081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.189945482Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190009182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190026683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190220685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190263486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190520589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190669591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190716191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190728492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.190839193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.191192898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194247737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194367638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194558841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194663742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.194795944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195368451Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.195462552Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220446573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220530874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220815977Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220940379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.220961379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221231583Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.221822990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222033793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222143394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222181895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222200695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222229595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222251396Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222320897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222367097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222383497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222398798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222414398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222438198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222458898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222474399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222508799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222524499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222540899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222555500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222572000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222588200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222628301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222643801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222659801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222679401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222703802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222745302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222782703Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222911604Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222975905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.222992605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223005105Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223156807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223197908Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.223212708Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229340687Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.229588390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.230467901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 00:10:16 ha-267500-m02 dockerd[668]: time="2024-04-29T00:10:16.231131810Z" level=info msg="containerd successfully booted in 0.070317s"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.196765446Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.225741894Z" level=info msg="Loading containers: start."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.520224287Z" level=info msg="Loading containers: done."
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.548826467Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.549157372Z" level=info msg="Daemon has completed initialization"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663745997Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 00:10:17 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:17.663852398Z" level=info msg="API listen on [::]:2376"
	Apr 29 00:10:17 ha-267500-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 00:10:47 ha-267500-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.694032846Z" level=info msg="Processing signal 'terminated'"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696514258Z" level=info msg="Daemon shutdown complete"
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696708859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696755859Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 29 00:10:47 ha-267500-m02 dockerd[662]: time="2024-04-29T00:10:47.696775959Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 00:10:48 ha-267500-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 00:10:48 ha-267500-m02 dockerd[1016]: time="2024-04-29T00:10:48.770678285Z" level=info msg="Starting up"
	Apr 29 00:11:48 ha-267500-m02 dockerd[1016]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 00:11:48 ha-267500-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 17:11:48.851004   15128 out.go:239] * 
	W0428 17:11:48.852842   15128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 17:11:48.855427   15128 out.go:177] 
	
	
	==> Docker <==
	Apr 29 00:30:05 ha-267500 dockerd[1316]: 2024/04/29 00:30:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:31:11 ha-267500 dockerd[1316]: 2024/04/29 00:31:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:45 ha-267500 dockerd[1316]: 2024/04/29 00:32:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:45 ha-267500 dockerd[1316]: 2024/04/29 00:32:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:32:46 ha-267500 dockerd[1316]: 2024/04/29 00:32:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:29 ha-267500 dockerd[1316]: 2024/04/29 00:33:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 00:33:30 ha-267500 dockerd[1316]: 2024/04/29 00:33:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8d1eabc40263       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Running             busybox                   0                   9e5d506c62d64       busybox-fc5497c4f-5xln2
	863860b786b42       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   c1f590ad490fe       coredns-7db6d8ff4d-p7tjz
	f85260746d557       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   586f91a6b0d3d       coredns-7db6d8ff4d-2d6ct
	f23ff280b691c       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   4f7c6837c24bd       storage-provisioner
	31e97721c439f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   9a810f16fad2b       kindnet-6pr2b
	b505176bff8dd       a0bf559e280cf                                                                                         27 minutes ago      Running             kube-proxy                0                   f041e2ebf6955       kube-proxy-59kz7
	e8de8cc5d0941       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     28 minutes ago      Running             kube-vip                  0                   5e6adedaca2d1       kube-vip-ha-267500
	1bb77467f58fc       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   bd2f63e7ff884       etcd-ha-267500
	e3f1a76ec8d43       c42f13656d0b2                                                                                         28 minutes ago      Running             kube-apiserver            0                   1aac39df0e147       kube-apiserver-ha-267500
	8e1e8e3ae83a4       259c8277fcbbc                                                                                         28 minutes ago      Running             kube-scheduler            0                   59e9e09e1fe2e       kube-scheduler-ha-267500
	988ba6e93dbd2       c7aad43836fa5                                                                                         28 minutes ago      Running             kube-controller-manager   0                   b062edd237fa4       kube-controller-manager-ha-267500
	
	
	==> coredns [863860b786b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56042 - 38920 "HINFO IN 6310058863699759000.886894576477842994. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026858243s
	[INFO] 10.244.0.4:52183 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.109912239s
	[INFO] 10.244.0.4:36966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.019781143s
	[INFO] 10.244.0.4:50436 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.124688347s
	[INFO] 10.244.0.4:39307 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231401s
	[INFO] 10.244.0.4:48774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000438101s
	[INFO] 10.244.0.4:55657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001919s
	[INFO] 10.244.0.4:39536 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000243301s
	
	
	==> coredns [f85260746d55] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52661 - 10332 "HINFO IN 6890724632724915343.2842102422429648823. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049972505s
	[INFO] 10.244.0.4:36002 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189801s
	[INFO] 10.244.0.4:39517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002061s
	[INFO] 10.244.0.4:58443 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.132665688s
	[INFO] 10.244.0.4:58628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000428701s
	[INFO] 10.244.0.4:35412 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002027s
	[INFO] 10.244.0.4:55943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.02269265s
	[INFO] 10.244.0.4:41245 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000423501s
	[INFO] 10.244.0.4:57855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168501s
	[INFO] 10.244.0.4:59251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000973s
	[INFO] 10.244.0.4:49224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193501s
	[INFO] 10.244.0.4:39630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002705s
	[INFO] 10.244.0.4:33915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000299901s
	[INFO] 10.244.0.4:44933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000954s
	
	
	==> describe nodes <==
	Name:               ha-267500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T17_08_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:08:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:36:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:07:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:33:03 +0000   Mon, 29 Apr 2024 00:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.226.61
	  Hostname:    ha-267500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 077cacd754b64c3dad0beeef28749850
	  System UUID:                961ce819-6c1b-c24a-99df-3205dca32605
	  Boot ID:                    bb08693c-1f82-4307-a58c-bdcce00f2d7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xln2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-2d6ct             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-p7tjz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-267500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-6pr2b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-267500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-267500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-59kz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-267500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-267500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-267500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-267500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-267500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-267500 event: Registered Node ha-267500 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-267500 status is now: NodeReady
	
	
	Name:               ha-267500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-267500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-267500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T17_28_02_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-267500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:33:37 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:33:37 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:33:37 +0000   Mon, 29 Apr 2024 00:28:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:33:37 +0000   Mon, 29 Apr 2024 00:28:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.233.131
	  Hostname:    ha-267500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0562d38ef374b73969ab15fed947e11
	  System UUID:                c94a104a-b670-854e-ac89-f41b3533cc69
	  Boot ID:                    bca10429-bddd-4547-8fb0-c50d93740969
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jxx6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-mspbr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m7s
	  kube-system                 kube-proxy-jcph5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  8m7s (x2 over 8m7s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m7s (x2 over 8m7s)  kubelet          Node ha-267500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m7s (x2 over 8m7s)  kubelet          Node ha-267500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m4s                 node-controller  Node ha-267500-m03 event: Registered Node ha-267500-m03 in Controller
	  Normal  NodeReady                7m50s                kubelet          Node ha-267500-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 00:06] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.760915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.419480] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183676] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Apr29 00:07] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.112445] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.557599] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.220083] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.252325] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.857578] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.206645] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.195057] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.281554] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.671296] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.127733] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.851029] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.965698] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.101314] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.149606] kauditd_printk_skb: 67 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[ +14.798165] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.098725] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 00:12] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1bb77467f58f] <==
	{"level":"warn","ts":"2024-04-29T00:28:01.015001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.747946ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321686993 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T00:28:01.015171Z","caller":"traceutil/trace.go:171","msg":"trace[1066403778] linearizableReadLoop","detail":"{readStateIndex:2839; appliedIndex:2838; }","duration":"166.002504ms","start":"2024-04-29T00:28:00.849156Z","end":"2024-04-29T00:28:01.015158Z","steps":["trace[1066403778] 'read index received'  (duration: 64.927058ms)","trace[1066403778] 'applied index is now lower than readState.Index'  (duration: 101.074346ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:01.015567Z","caller":"traceutil/trace.go:171","msg":"trace[1444860087] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"309.347954ms","start":"2024-04-29T00:28:00.706202Z","end":"2024-04-29T00:28:01.01555Z","steps":["trace[1444860087] 'process raft request'  (duration: 207.946307ms)","trace[1444860087] 'compare'  (duration: 100.659345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:01.015577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.380105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/kube-system/bootstrap-token-46antb\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:01.015843Z","caller":"traceutil/trace.go:171","msg":"trace[1574012410] range","detail":"{range_begin:/registry/secrets/kube-system/bootstrap-token-46antb; range_end:; response_count:0; response_revision:2582; }","duration":"166.706906ms","start":"2024-04-29T00:28:00.849128Z","end":"2024-04-29T00:28:01.015834Z","steps":["trace[1574012410] 'agreement among raft nodes before linearized reading'  (duration: 166.065204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:01.015715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:00.706185Z","time spent":"309.436654ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2563 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911183 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"info","ts":"2024-04-29T00:28:01.13236Z","caller":"traceutil/trace.go:171","msg":"trace[848518735] transaction","detail":"{read_only:false; response_revision:2583; number_of_response:1; }","duration":"106.51056ms","start":"2024-04-29T00:28:01.02575Z","end":"2024-04-29T00:28:01.132261Z","steps":["trace[848518735] 'process raft request'  (duration: 100.002844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:10.719709Z","caller":"traceutil/trace.go:171","msg":"trace[688876790] transaction","detail":"{read_only:false; response_revision:2633; number_of_response:1; }","duration":"131.602022ms","start":"2024-04-29T00:28:10.588085Z","end":"2024-04-29T00:28:10.719687Z","steps":["trace[688876790] 'process raft request'  (duration: 131.335422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.908169ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11906267438321687140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:253b8f272df6da63>","response":"size:41"}
	{"level":"info","ts":"2024-04-29T00:28:11.057309Z","caller":"traceutil/trace.go:171","msg":"trace[730869850] linearizableReadLoop","detail":"{readStateIndex:2894; appliedIndex:2893; }","duration":"310.80146ms","start":"2024-04-29T00:28:10.746493Z","end":"2024-04-29T00:28:11.057294Z","steps":["trace[730869850] 'read index received'  (duration: 118.63939ms)","trace[730869850] 'applied index is now lower than readState.Index'  (duration: 192.16047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.057392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.91436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:11.057434Z","caller":"traceutil/trace.go:171","msg":"trace[965932074] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2633; }","duration":"310.98056ms","start":"2024-04-29T00:28:10.746443Z","end":"2024-04-29T00:28:11.057424Z","steps":["trace[965932074] 'agreement among raft nodes before linearized reading'  (duration: 310.91126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:11.057458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.746429Z","time spent":"311.02236ms","remote":"127.0.0.1:52498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T00:28:11.057874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:10.721431Z","time spent":"336.441923ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-04-29T00:28:11.410781Z","caller":"traceutil/trace.go:171","msg":"trace[921900368] linearizableReadLoop","detail":"{readStateIndex:2895; appliedIndex:2894; }","duration":"284.369895ms","start":"2024-04-29T00:28:11.126395Z","end":"2024-04-29T00:28:11.410765Z","steps":["trace[921900368] 'read index received'  (duration: 193.861274ms)","trace[921900368] 'applied index is now lower than readState.Index'  (duration: 90.507421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.411124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.711696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-267500-m03\" ","response":"range_response_count:1 size:2813"}
	{"level":"info","ts":"2024-04-29T00:28:11.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1500780481] range","detail":"{range_begin:/registry/minions/ha-267500-m03; range_end:; response_count:1; response_revision:2634; }","duration":"284.831096ms","start":"2024-04-29T00:28:11.126391Z","end":"2024-04-29T00:28:11.411222Z","steps":["trace[1500780481] 'agreement among raft nodes before linearized reading'  (duration: 284.474795ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:11.411809Z","caller":"traceutil/trace.go:171","msg":"trace[1062724437] transaction","detail":"{read_only:false; response_revision:2634; number_of_response:1; }","duration":"351.77576ms","start":"2024-04-29T00:28:11.059046Z","end":"2024-04-29T00:28:11.410821Z","steps":["trace[1062724437] 'process raft request'  (duration: 261.137839ms)","trace[1062724437] 'compare'  (duration: 90.397121ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:28:11.412239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:28:11.059032Z","time spent":"352.927263ms","remote":"127.0.0.1:52526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.226.61\" mod_revision:2582 > success:<request_put:<key:\"/registry/masterleases/172.27.226.61\" value_size:66 lease:2682895401466911331 >> failure:<request_range:<key:\"/registry/masterleases/172.27.226.61\" > >"}
	{"level":"warn","ts":"2024-04-29T00:28:16.429655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.224744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:16.429988Z","caller":"traceutil/trace.go:171","msg":"trace[1266991256] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2651; }","duration":"181.540745ms","start":"2024-04-29T00:28:16.248407Z","end":"2024-04-29T00:28:16.429948Z","steps":["trace[1266991256] 'range keys from in-memory index tree'  (duration: 181.210444ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:31:58.46072Z","caller":"traceutil/trace.go:171","msg":"trace[1217074224] transaction","detail":"{read_only:false; response_revision:3091; number_of_response:1; }","duration":"104.985672ms","start":"2024-04-29T00:31:58.355715Z","end":"2024-04-29T00:31:58.460701Z","steps":["trace[1217074224] 'process raft request'  (duration: 67.769676ms)","trace[1217074224] 'compare'  (duration: 36.776895ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:32:56.706988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2573}
	{"level":"info","ts":"2024-04-29T00:32:56.715678Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2573,"took":"8.393022ms","hash":2612196233,"current-db-size-bytes":2490368,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1978368,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-04-29T00:32:56.715794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2612196233,"revision":2573,"compact-revision":2039}
	
	
	==> kernel <==
	 00:36:10 up 30 min,  0 users,  load average: 0.08, 0.27, 0.32
	Linux ha-267500 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [31e97721c439] <==
	I0429 00:35:06.919100       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:35:16.926100       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:35:16.926213       1 main.go:227] handling current node
	I0429 00:35:16.926226       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:35:16.926233       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:35:26.942173       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:35:26.942216       1 main.go:227] handling current node
	I0429 00:35:26.942226       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:35:26.942232       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:35:36.955690       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:35:36.955796       1 main.go:227] handling current node
	I0429 00:35:36.955809       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:35:36.955817       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:35:46.963545       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:35:46.963694       1 main.go:227] handling current node
	I0429 00:35:46.963708       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:35:46.963716       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:35:56.979802       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:35:56.979953       1 main.go:227] handling current node
	I0429 00:35:56.980018       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:35:56.980032       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	I0429 00:36:06.990260       1 main.go:223] Handling node with IPs: map[172.27.226.61:{}]
	I0429 00:36:06.990510       1 main.go:227] handling current node
	I0429 00:36:06.990525       1 main.go:223] Handling node with IPs: map[172.27.233.131:{}]
	I0429 00:36:06.990533       1 main.go:250] Node ha-267500-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e3f1a76ec8d4] <==
	I0429 00:08:00.626826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:08:01.319490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0429 00:08:02.484116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:08:02.484213       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:08:02.484272       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 00:08:02.485404       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 00:08:02.486881       1 timeout.go:142] post-timeout activity - time-elapsed: 2.861712ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0429 00:08:02.642721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:08:02.684736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 00:08:02.712741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:08:15.229730       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 00:08:15.308254       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 00:23:49.502033       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49293: use of closed network connection
	E0429 00:23:50.824153       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49301: use of closed network connection
	E0429 00:23:51.986308       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49309: use of closed network connection
	E0429 00:24:25.826543       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49329: use of closed network connection
	E0429 00:24:36.281538       1 conn.go:339] Error on socket receive: read tcp 172.27.239.254:8443->172.27.224.1:49332: use of closed network connection
	I0429 00:27:56.312329       1 trace.go:236] Trace[1132022318]: "Update" accept:application/json, */*,audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:27:55.724) (total time: 587ms):
	Trace[1132022318]: ["GuaranteedUpdate etcd3" audit-id:b430ffa2-60e5-4395-a53d-a8ebd619d367,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 587ms (00:27:55.725)
	Trace[1132022318]:  ---"Txn call completed" 586ms (00:27:56.312)]
	Trace[1132022318]: [587.55203ms] [587.55203ms] END
	I0429 00:28:11.413223       1 trace.go:236] Trace[768089845]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.226.61,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 00:28:10.678) (total time: 734ms):
	Trace[768089845]: ---"Transaction prepared" 338ms (00:28:11.058)
	Trace[768089845]: ---"Txn call completed" 354ms (00:28:11.413)
	Trace[768089845]: [734.530496ms] [734.530496ms] END
	
	
	==> kube-controller-manager [988ba6e93dbd] <==
	I0429 00:08:29.407024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.8µs"
	I0429 00:08:29.410999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.9µs"
	I0429 00:08:29.438715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0429 00:08:29.463289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.4µs"
	I0429 00:08:30.150197       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 00:08:32.178168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.562718ms"
	I0429 00:08:32.178767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.296µs"
	I0429 00:08:32.227761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.198293ms"
	I0429 00:08:32.228518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.397µs"
	I0429 00:12:22.804126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.965383ms"
	I0429 00:12:22.823038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.733135ms"
	I0429 00:12:22.823277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.2µs"
	I0429 00:12:22.828995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.1µs"
	I0429 00:12:22.829468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.999µs"
	I0429 00:12:25.591541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.187606ms"
	I0429 00:12:25.591791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.1µs"
	I0429 00:28:02.170352       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-267500-m03\" does not exist"
	I0429 00:28:02.230498       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-267500-m03" podCIDRs=["10.244.1.0/24"]
	I0429 00:28:05.393266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-267500-m03"
	I0429 00:28:19.456843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-267500-m03"
	I0429 00:28:19.485470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.1µs"
	I0429 00:28:19.487549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.6µs"
	I0429 00:28:19.505362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.4µs"
	I0429 00:28:22.722440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.931424ms"
	I0429 00:28:22.722950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.301µs"
	
	
	==> kube-proxy [b505176bff8d] <==
	I0429 00:08:18.378677       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:08:18.445828       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.226.61"]
	I0429 00:08:18.505105       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:08:18.505147       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:08:18.505201       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:08:18.511281       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:08:18.512271       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:08:18.512309       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:18.516363       1 config.go:192] "Starting service config controller"
	I0429 00:08:18.517198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:08:18.517237       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:08:18.517245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:08:18.524551       1 config.go:319] "Starting node config controller"
	I0429 00:08:18.524570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:08:18.618172       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:08:18.618299       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:08:18.624657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e1e8e3ae83a] <==
	W0429 00:07:59.408672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:07:59.409434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:07:59.614629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.614883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.614630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.616141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.671538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:07:59.671604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:07:59.688105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:07:59.688348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:07:59.699454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:07:59.699500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:07:59.827114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:07:59.827663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:07:59.863569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:07:59.864226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:07:59.922434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:07:59.922488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:07:59.934988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:07:59.935206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:07:59.935823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:07:59.936001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:07:59.940321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:07:59.940831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 00:08:01.614591       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 00:32:02 ha-267500 kubelet[2223]: E0429 00:32:02.768148    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:32:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:32:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:33:02 ha-267500 kubelet[2223]: E0429 00:33:02.780392    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:33:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:33:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:33:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:33:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:34:02 ha-267500 kubelet[2223]: E0429 00:34:02.767613    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:34:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:34:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:34:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:34:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:35:02 ha-267500 kubelet[2223]: E0429 00:35:02.770481    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:35:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:35:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:35:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:35:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:36:02 ha-267500 kubelet[2223]: E0429 00:36:02.771973    2223 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:36:02 ha-267500 kubelet[2223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:36:02 ha-267500 kubelet[2223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:36:02 ha-267500 kubelet[2223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:36:02 ha-267500 kubelet[2223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:36:02.624756    5440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
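The recurring kubelet "Could not set up iptables canary" entries above are the kubelet's periodic ip6tables probe failing because the guest exposes no ip6tables nat table; they fire once a minute and are background noise rather than the cause of this failure. One way to verify from the host (a sketch, assuming the guest image ships the ip6table_nat kernel module, which the minikube ISO may not):

	# Check whether the ip6tables nat table exists inside the guest VM.
	minikube -p ha-267500 ssh -- "sudo ip6tables -t nat -L"
	# If it is missing, try loading the module; failure here confirms the guest kernel lacks it.
	minikube -p ha-267500 ssh -- "sudo modprobe ip6table_nat"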
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-267500 -n ha-267500: (11.7114494s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-267500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wg44s
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s
helpers_test.go:282: (dbg) kubectl --context ha-267500 describe pod busybox-fc5497c4f-wg44s:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-wg44s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv7kl (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bv7kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  8m51s (x5 over 24m)   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2m51s (x3 over 8m4s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (160.36s)
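The FailedScheduling events above show the pending busybox replica blocked by pod anti-affinity: every Ready node already hosts a matching replica, so 0/2 nodes qualify until a third node becomes schedulable. A sketch for inspecting the rule from the test context (commands are illustrative, not part of the recorded run):

	# Dump the anti-affinity spec to see which label selector blocks co-scheduling.
	kubectl --context ha-267500 get pod busybox-fc5497c4f-wg44s -o jsonpath="{.spec.affinity.podAntiAffinity}"
	# Compare the number of Ready nodes against the deployment's replica count.
	kubectl --context ha-267500 get nodes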

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (55.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- sh -c "ping -c 1 172.27.224.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- sh -c "ping -c 1 172.27.224.1": exit status 1 (10.4506914s)

                                                
                                                
-- stdout --
	PING 172.27.224.1 (172.27.224.1): 56 data bytes
	
	--- 172.27.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 18:12:47.074366   10100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.27.224.1) from pod (busybox-fc5497c4f-4fdn6): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4qvlm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4qvlm -- sh -c "ping -c 1 172.27.224.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4qvlm -- sh -c "ping -c 1 172.27.224.1": exit status 1 (10.4300249s)

                                                
                                                
-- stdout --
	PING 172.27.224.1 (172.27.224.1): 56 data bytes
	
	--- 172.27.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 18:12:57.962241    9108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.27.224.1) from pod (busybox-fc5497c4f-4qvlm): exit status 1
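Both pods resolved host.minikube.internal, yet ICMP to the host gateway 172.27.224.1 shows 100% packet loss, which points at the Windows host dropping echo requests rather than a pod-side networking fault; on Hyper-V the host firewall commonly blocks inbound ICMPv4 on the vEthernet (Default Switch) adapter. A host-side sketch (the rule name is illustrative, and the firewall diagnosis is an assumption, not taken from this run):

	# Allow inbound ICMPv4 echo requests so VMs on the Default Switch can ping the host.
	New-NetFirewallRule -DisplayName "ICMPv4-In (minikube)" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow
	# Re-run the failing probe from the pod afterwards.
	kubectl --context multinode-788600 exec busybox-fc5497c4f-4fdn6 -- ping -c 1 172.27.224.1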
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-788600 -n multinode-788600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-788600 -n multinode-788600: (11.5162097s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 logs -n 25: (8.1502905s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-995600 ssh -- ls                    | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:01 PDT | 28 Apr 24 18:02 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-995600                           | mount-start-1-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:02 PDT | 28 Apr 24 18:02 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-995600 ssh -- ls                    | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:02 PDT | 28 Apr 24 18:02 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-995600                           | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:02 PDT | 28 Apr 24 18:03 PDT |
	| start   | -p mount-start-2-995600                           | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:03 PDT | 28 Apr 24 18:05 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:05 PDT |                     |
	|         | --profile mount-start-2-995600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-995600 ssh -- ls                    | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:05 PDT | 28 Apr 24 18:05 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-995600                           | mount-start-2-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:05 PDT | 28 Apr 24 18:05 PDT |
	| delete  | -p mount-start-1-995600                           | mount-start-1-995600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:05 PDT | 28 Apr 24 18:05 PDT |
	| start   | -p multinode-788600                               | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:05 PDT | 28 Apr 24 18:12 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- apply -f                   | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- rollout                    | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- get pods -o                | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- get pods -o                | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4fdn6 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4qvlm --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4fdn6 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4qvlm --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4fdn6 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4qvlm -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- get pods -o                | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4fdn6                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT |                     |
	|         | busybox-fc5497c4f-4fdn6 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT | 28 Apr 24 18:12 PDT |
	|         | busybox-fc5497c4f-4qvlm                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-788600 -- exec                       | multinode-788600     | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:12 PDT |                     |
	|         | busybox-fc5497c4f-4qvlm -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.224.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 18:05:47
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 18:05:47.666414    6096 out.go:291] Setting OutFile to fd 1616 ...
	I0428 18:05:47.667126    6096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:05:47.667126    6096 out.go:304] Setting ErrFile to fd 1072...
	I0428 18:05:47.667223    6096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:05:47.692082    6096 out.go:298] Setting JSON to false
	I0428 18:05:47.695988    6096 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10190,"bootTime":1714342556,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 18:05:47.696091    6096 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 18:05:47.701262    6096 out.go:177] * [multinode-788600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 18:05:47.705181    6096 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:05:47.705181    6096 notify.go:220] Checking for updates...
	I0428 18:05:47.708238    6096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 18:05:47.711025    6096 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 18:05:47.714663    6096 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 18:05:47.716906    6096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 18:05:47.719873    6096 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 18:05:52.771203    6096 out.go:177] * Using the hyperv driver based on user configuration
	I0428 18:05:52.774423    6096 start.go:297] selected driver: hyperv
	I0428 18:05:52.774544    6096 start.go:901] validating driver "hyperv" against <nil>
	I0428 18:05:52.774642    6096 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 18:05:52.830547    6096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 18:05:52.831640    6096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:05:52.831640    6096 cni.go:84] Creating CNI manager for ""
	I0428 18:05:52.831640    6096 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 18:05:52.831640    6096 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 18:05:52.832525    6096 start.go:340] cluster config:
	{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:05:52.832751    6096 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 18:05:52.837101    6096 out.go:177] * Starting "multinode-788600" primary control-plane node in "multinode-788600" cluster
	I0428 18:05:52.840556    6096 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:05:52.840556    6096 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 18:05:52.840556    6096 cache.go:56] Caching tarball of preloaded images
	I0428 18:05:52.841503    6096 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:05:52.841815    6096 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:05:52.842035    6096 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:05:52.842035    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json: {Name:mk265ba6b07bce9e204b72381c5b8e47fafeb342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:05:52.842822    6096 start.go:360] acquireMachinesLock for multinode-788600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:05:52.843874    6096 start.go:364] duration metric: took 1.0035ms to acquireMachinesLock for "multinode-788600"
	I0428 18:05:52.843993    6096 start.go:93] Provisioning new machine with config: &{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 18:05:52.843993    6096 start.go:125] createHost starting for "" (driver="hyperv")
	I0428 18:05:52.846581    6096 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 18:05:52.847767    6096 start.go:159] libmachine.API.Create for "multinode-788600" (driver="hyperv")
	I0428 18:05:52.847767    6096 client.go:168] LocalClient.Create starting
	I0428 18:05:52.848009    6096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 18:05:52.848620    6096 main.go:141] libmachine: Decoding PEM data...
	I0428 18:05:52.848709    6096 main.go:141] libmachine: Parsing certificate...
	I0428 18:05:52.848864    6096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 18:05:52.848864    6096 main.go:141] libmachine: Decoding PEM data...
	I0428 18:05:52.848864    6096 main.go:141] libmachine: Parsing certificate...
	I0428 18:05:52.848864    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 18:05:54.830731    6096 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 18:05:54.830731    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:05:54.830731    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 18:05:56.522591    6096 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 18:05:56.523425    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:05:56.523646    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 18:05:57.991618    6096 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 18:05:57.991873    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:05:57.992026    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 18:06:01.458940    6096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 18:06:01.459955    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:01.462234    6096 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 18:06:01.968602    6096 main.go:141] libmachine: Creating SSH key...
	I0428 18:06:02.471788    6096 main.go:141] libmachine: Creating VM...
	I0428 18:06:02.471788    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 18:06:05.274935    6096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 18:06:05.274935    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:05.274935    6096 main.go:141] libmachine: Using switch "Default Switch"
	I0428 18:06:05.274935    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 18:06:06.999847    6096 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 18:06:07.000457    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:07.000457    6096 main.go:141] libmachine: Creating VHD
	I0428 18:06:07.000457    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 18:06:10.550435    6096 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AA492B9A-7E54-4AE0-B94D-007BF3036558
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 18:06:10.550558    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:10.550558    6096 main.go:141] libmachine: Writing magic tar header
	I0428 18:06:10.550657    6096 main.go:141] libmachine: Writing SSH key tar header
	I0428 18:06:10.559870    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 18:06:13.611289    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:13.611289    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:13.611289    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\disk.vhd' -SizeBytes 20000MB
	I0428 18:06:16.026330    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:16.026330    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:16.026777    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-788600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 18:06:19.535561    6096 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-788600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 18:06:19.536538    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:19.536655    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-788600 -DynamicMemoryEnabled $false
	I0428 18:06:21.658017    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:21.658017    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:21.658106    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-788600 -Count 2
	I0428 18:06:23.737491    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:23.737491    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:23.737491    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-788600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\boot2docker.iso'
	I0428 18:06:26.223243    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:26.223243    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:26.223349    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-788600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\disk.vhd'
	I0428 18:06:28.757930    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:28.757930    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:28.758946    6096 main.go:141] libmachine: Starting VM...
	I0428 18:06:28.759059    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600
	I0428 18:06:31.755718    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:31.755718    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:31.755718    6096 main.go:141] libmachine: Waiting for host to start...
	I0428 18:06:31.755718    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:06:33.887265    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:06:33.887493    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:33.887669    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:06:36.306093    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:36.306093    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:37.308017    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:06:39.390465    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:06:39.390465    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:39.391069    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:06:41.823258    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:41.823327    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:42.833499    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:06:44.894231    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:06:44.894435    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:44.894519    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:06:47.352827    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:47.352827    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:48.362949    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:06:50.442898    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:06:50.442898    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:50.443102    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:06:52.846007    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:06:52.846007    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:53.857485    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:06:55.947610    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:06:55.947610    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:55.947610    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:06:58.445834    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:06:58.445834    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:06:58.445934    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:00.430530    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:00.431538    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:00.431538    6096 machine.go:94] provisionDockerMachine start ...
	I0428 18:07:00.431538    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:02.465895    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:02.465895    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:02.465895    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:04.879922    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:04.880607    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:04.890244    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:04.900921    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:04.900921    6096 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:07:05.050838    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
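
provisionDockerMachine opens a native SSH session to the address just discovered and runs `hostname`. A sketch of that step using the external golang.org/x/crypto/ssh module; the user, port, and key path are taken from the sshutil lines later in this log, and the host-key check is skipped to match the nil host-key fields in the client dump above:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; no host-key pinning
        }
        client, err := ssh.Dial("tcp", "172.27.231.169:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname") // the command shown in the log above
        fmt.Println(string(out), err)
    }
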
	
	I0428 18:07:05.050838    6096 buildroot.go:166] provisioning hostname "multinode-788600"
	I0428 18:07:05.051019    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:07.010043    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:07.010102    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:07.010102    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:09.486297    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:09.486297    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:09.493490    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:09.494345    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:09.494345    6096 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600 && echo "multinode-788600" | sudo tee /etc/hostname
	I0428 18:07:09.663773    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600
	
	I0428 18:07:09.663849    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:11.704707    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:11.704764    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:11.704764    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:14.167601    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:14.167601    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:14.174619    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:14.175252    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:14.175252    6096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:07:14.335744    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 18:07:14.335744    6096 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:07:14.335744    6096 buildroot.go:174] setting up certificates
	I0428 18:07:14.335744    6096 provision.go:84] configureAuth start
	I0428 18:07:14.335744    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:16.368232    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:16.368541    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:16.368541    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:18.828182    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:18.828182    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:18.828336    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:20.858096    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:20.858096    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:20.858810    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:23.306024    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:23.306376    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:23.306376    6096 provision.go:143] copyHostCerts
	I0428 18:07:23.306559    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:07:23.306902    6096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:07:23.306902    6096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:07:23.307400    6096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:07:23.308655    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:07:23.308899    6096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:07:23.309012    6096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:07:23.309376    6096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:07:23.310667    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:07:23.310957    6096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:07:23.311074    6096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:07:23.311379    6096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:07:23.312790    6096 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600 san=[127.0.0.1 172.27.231.169 localhost minikube multinode-788600]
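
The line above generates a CA-signed server certificate whose SANs cover the loopback address, the VM's IP, and the machine's names. A standard-library sketch of that shape of cert generation, not minikube's provision code; the real flow loads the CA from ca.pem/ca-key.pem instead of creating one, the 26280h lifetime mirrors the profile's CertExpiration, and errors are elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key pair (loaded from disk in the real flow).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert; SANs mirror the san=[...] list in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-788600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.231.169")},
            DNSNames:     []string{"localhost", "minikube", "multinode-788600"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // server.pem payload
    }
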
	I0428 18:07:23.539222    6096 provision.go:177] copyRemoteCerts
	I0428 18:07:23.552256    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:07:23.552256    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:25.596651    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:25.596651    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:25.596746    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:27.996357    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:27.996357    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:27.996357    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:07:28.098906    6096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5466414s)
	I0428 18:07:28.098906    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:07:28.099196    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 18:07:28.146468    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:07:28.147934    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:07:28.192961    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:07:28.193853    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0428 18:07:28.240747    6096 provision.go:87] duration metric: took 13.9049774s to configureAuth
	I0428 18:07:28.240833    6096 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:07:28.241596    6096 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:07:28.241692    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:30.281797    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:30.282421    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:30.282421    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:32.749241    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:32.749406    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:32.755826    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:32.755992    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:32.755992    6096 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:07:32.898704    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:07:32.898704    6096 buildroot.go:70] root file system type: tmpfs
	I0428 18:07:32.898704    6096 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:07:32.898704    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:34.931130    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:34.931350    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:34.931350    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:37.369057    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:37.369356    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:37.374538    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:37.375285    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:37.375285    6096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:07:37.527549    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:07:37.527549    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:39.543188    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:39.543188    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:39.543330    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:42.002529    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:42.002529    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:42.009079    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:42.009907    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:42.009907    6096 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:07:44.162247    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:07:44.162247    6096 machine.go:97] duration metric: took 43.730627s to provisionDockerMachine
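
The `diff -u old new || { mv ...; systemctl ... }` command a few lines up is an idempotency idiom: the unit file is swapped in and docker restarted only when the rendered content differs (here diff fails because docker.service did not exist yet, so the unit is installed and enabled). A small Go sketch of the same "write only if changed" pattern, with a hypothetical updateIfChanged helper:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // updateIfChanged writes newContent to path only when it differs from the
    // current content, and reports whether the caller should restart the service.
    func updateIfChanged(path string, newContent []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // identical: nothing to do, no restart
        }
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        // Stage then rename, mirroring docker.service.new -> docker.service above.
        tmp := path + ".new"
        if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path)
    }

    func main() {
        changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err) // changed==true means daemon-reload + restart
    }
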
	I0428 18:07:44.162247    6096 client.go:171] duration metric: took 1m51.3142702s to LocalClient.Create
	I0428 18:07:44.162247    6096 start.go:167] duration metric: took 1m51.3142702s to libmachine.API.Create "multinode-788600"
	I0428 18:07:44.162247    6096 start.go:293] postStartSetup for "multinode-788600" (driver="hyperv")
	I0428 18:07:44.162247    6096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:07:44.175650    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:07:44.175650    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:46.209248    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:46.209248    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:46.210099    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:48.687148    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:48.687148    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:48.687148    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:07:48.807572    6096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6317994s)
	I0428 18:07:48.821850    6096 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:07:48.828090    6096 command_runner.go:130] > NAME=Buildroot
	I0428 18:07:48.828205    6096 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:07:48.828205    6096 command_runner.go:130] > ID=buildroot
	I0428 18:07:48.828205    6096 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:07:48.828205    6096 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:07:48.828316    6096 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:07:48.828316    6096 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:07:48.828915    6096 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:07:48.829899    6096 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:07:48.829968    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:07:48.842150    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:07:48.859823    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:07:48.907482    6096 start.go:296] duration metric: took 4.7452262s for postStartSetup
	I0428 18:07:48.910688    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:50.973821    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:50.973913    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:50.973994    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:53.426665    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:53.426933    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:53.427162    6096 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:07:53.430653    6096 start.go:128] duration metric: took 2m0.5862837s to createHost
	I0428 18:07:53.430827    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:07:55.432231    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:07:55.432231    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:55.432513    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:07:57.862369    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:07:57.862588    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:07:57.867431    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:07:57.868253    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:07:57.868253    6096 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 18:07:58.019625    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714352878.010799585
	
	I0428 18:07:58.019693    6096 fix.go:216] guest clock: 1714352878.010799585
	I0428 18:07:58.019693    6096 fix.go:229] Guest: 2024-04-28 18:07:58.010799585 -0700 PDT Remote: 2024-04-28 18:07:53.4307498 -0700 PDT m=+125.874560301 (delta=4.580049785s)
	I0428 18:07:58.019807    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:08:00.038638    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:08:00.039259    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:00.039259    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:08:02.507356    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:08:02.507356    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:02.514603    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:08:02.515322    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.231.169 22 <nil> <nil>}
	I0428 18:08:02.515322    6096 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714352878
	I0428 18:08:02.657901    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:07:58 UTC 2024
	
	I0428 18:08:02.657955    6096 fix.go:236] clock set: Mon Apr 29 01:07:58 UTC 2024
	 (err=<nil>)
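
The clock-fix step above reads the guest clock over SSH with `date +%s.%N`, compares it against the local timestamp taken when the host was created, and, since the 4.58s delta exceeds the tolerance, writes a time back with `sudo date -s @<seconds>`. A rough sketch of the comparison; exec.Command stands in for the SSH runner, and the 1-second tolerance is an assumption, not a value from this log:

    package main

    import (
        "fmt"
        "math"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest clock in seconds.nanoseconds form (run over SSH in the real flow).
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            panic(err)
        }
        guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            panic(err)
        }
        host := float64(time.Now().UnixNano()) / 1e9
        delta := guest - host
        fmt.Printf("delta=%.3fs\n", delta)
        if math.Abs(delta) > 1 {
            // Mirrors the command in the log above (sudo date -s @1714352878).
            fmt.Printf("would run: sudo date -s @%d\n", int64(guest))
        }
    }
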
	I0428 18:08:02.657955    6096 start.go:83] releasing machines lock for "multinode-788600", held for 2m9.8137976s
	I0428 18:08:02.658176    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:08:04.669517    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:08:04.669517    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:04.669771    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:08:07.106860    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:08:07.107114    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:07.111436    6096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:08:07.111591    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:08:07.122412    6096 ssh_runner.go:195] Run: cat /version.json
	I0428 18:08:07.122412    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:08:09.178526    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:08:09.178617    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:09.178737    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:08:09.215138    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:08:09.215215    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:09.215215    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:08:11.684935    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:08:11.685010    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:11.685100    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:08:11.754253    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:08:11.754327    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:08:11.754327    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:08:11.922474    6096 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:08:11.922702    6096 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0428 18:08:11.922702    6096 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8111936s)
	I0428 18:08:11.922702    6096 ssh_runner.go:235] Completed: cat /version.json: (4.8002804s)
	I0428 18:08:11.935733    6096 ssh_runner.go:195] Run: systemctl --version
	I0428 18:08:11.945216    6096 command_runner.go:130] > systemd 252 (252)
	I0428 18:08:11.945342    6096 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0428 18:08:11.959104    6096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:08:11.967203    6096 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0428 18:08:11.968497    6096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:08:11.981702    6096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:08:12.007979    6096 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:08:12.007979    6096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:08:12.008301    6096 start.go:494] detecting cgroup driver to use...
	I0428 18:08:12.008667    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:08:12.041926    6096 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:08:12.054766    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:08:12.086342    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:08:12.107238    6096 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:08:12.121090    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:08:12.156382    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:08:12.190748    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:08:12.223252    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:08:12.257480    6096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:08:12.289472    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:08:12.321717    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:08:12.352412    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
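
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox image, force the cgroupfs cgroup driver (SystemdCgroup = false), normalize the runc runtime name, and set the CNI conf_dir. A sketch of one of those edits done in Go instead of sed, using the same anchored, whitespace-preserving regular expression:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "  SystemdCgroup = true\n")
        // (?m) makes ^/$ match per line; ${1} keeps the original indentation.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
        fmt.Print(string(out))
    }
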
	I0428 18:08:12.383221    6096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:08:12.402193    6096 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:08:12.417088    6096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:08:12.449685    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:08:12.645339    6096 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 18:08:12.683863    6096 start.go:494] detecting cgroup driver to use...
	I0428 18:08:12.696217    6096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:08:12.719236    6096 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:08:12.719480    6096 command_runner.go:130] > [Unit]
	I0428 18:08:12.719480    6096 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:08:12.719480    6096 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:08:12.719480    6096 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:08:12.719480    6096 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:08:12.719480    6096 command_runner.go:130] > StartLimitBurst=3
	I0428 18:08:12.719480    6096 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:08:12.719573    6096 command_runner.go:130] > [Service]
	I0428 18:08:12.719573    6096 command_runner.go:130] > Type=notify
	I0428 18:08:12.719573    6096 command_runner.go:130] > Restart=on-failure
	I0428 18:08:12.719573    6096 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:08:12.719659    6096 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:08:12.719659    6096 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:08:12.719659    6096 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:08:12.719659    6096 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:08:12.719731    6096 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:08:12.719731    6096 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:08:12.719731    6096 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:08:12.719731    6096 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:08:12.719731    6096 command_runner.go:130] > ExecStart=
	I0428 18:08:12.719822    6096 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:08:12.719852    6096 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:08:12.719852    6096 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:08:12.719852    6096 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:08:12.719852    6096 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:08:12.719852    6096 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:08:12.719852    6096 command_runner.go:130] > LimitCORE=infinity
	I0428 18:08:12.719852    6096 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:08:12.719852    6096 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:08:12.719852    6096 command_runner.go:130] > TasksMax=infinity
	I0428 18:08:12.719852    6096 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:08:12.719852    6096 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:08:12.719852    6096 command_runner.go:130] > Delegate=yes
	I0428 18:08:12.719852    6096 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:08:12.719852    6096 command_runner.go:130] > KillMode=process
	I0428 18:08:12.719852    6096 command_runner.go:130] > [Install]
	I0428 18:08:12.719852    6096 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:08:12.732890    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:08:12.769928    6096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:08:12.816806    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:08:12.851212    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:08:12.888198    6096 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:08:12.950118    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:08:12.976928    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:08:13.008868    6096 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 18:08:13.020119    6096 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:08:13.027000    6096 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:08:13.040026    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:08:13.057996    6096 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:08:13.106619    6096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:08:13.314375    6096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:08:13.501879    6096 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:08:13.501879    6096 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 18:08:13.548087    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:08:13.734648    6096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:08:16.273089    6096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5384364s)
	I0428 18:08:16.287459    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 18:08:16.325829    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:08:16.371701    6096 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 18:08:16.583651    6096 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 18:08:16.789479    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:08:17.017738    6096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 18:08:17.064221    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:08:17.111005    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:08:17.318979    6096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 18:08:17.436257    6096 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 18:08:17.448832    6096 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 18:08:17.461553    6096 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0428 18:08:17.461553    6096 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0428 18:08:17.461553    6096 command_runner.go:130] > Device: 0,22	Inode: 880         Links: 1
	I0428 18:08:17.461553    6096 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0428 18:08:17.461553    6096 command_runner.go:130] > Access: 2024-04-29 01:08:17.338607465 +0000
	I0428 18:08:17.461553    6096 command_runner.go:130] > Modify: 2024-04-29 01:08:17.338607465 +0000
	I0428 18:08:17.461733    6096 command_runner.go:130] > Change: 2024-04-29 01:08:17.343607490 +0000
	I0428 18:08:17.461733    6096 command_runner.go:130] >  Birth: -
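
"Will wait 60s for socket path" above is a readiness gate: minikube stats /var/run/cri-dockerd.sock until the socket appears (here it already exists, as the stat output shows). A minimal sketch of such a wait loop; the 500ms poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
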
	I0428 18:08:17.461733    6096 start.go:562] Will wait 60s for crictl version
	I0428 18:08:17.475325    6096 ssh_runner.go:195] Run: which crictl
	I0428 18:08:17.481070    6096 command_runner.go:130] > /usr/bin/crictl
	I0428 18:08:17.494466    6096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 18:08:17.551924    6096 command_runner.go:130] > Version:  0.1.0
	I0428 18:08:17.551924    6096 command_runner.go:130] > RuntimeName:  docker
	I0428 18:08:17.551924    6096 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0428 18:08:17.552022    6096 command_runner.go:130] > RuntimeApiVersion:  v1
	I0428 18:08:17.552022    6096 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 18:08:17.563819    6096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:08:17.602070    6096 command_runner.go:130] > 26.0.2
	I0428 18:08:17.612971    6096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:08:17.645892    6096 command_runner.go:130] > 26.0.2
	I0428 18:08:17.651739    6096 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 18:08:17.651890    6096 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 18:08:17.655840    6096 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 18:08:17.656427    6096 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 18:08:17.656427    6096 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 18:08:17.656427    6096 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 18:08:17.659336    6096 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 18:08:17.659336    6096 ip.go:210] interface addr: 172.27.224.1/20
	I0428 18:08:17.678015    6096 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 18:08:17.692006    6096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
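
The bash one-liner above keeps /etc/hosts idempotent: it drops any existing host.minikube.internal line, appends the current mapping, and copies the result back over /etc/hosts. The same filter-then-append logic as a Go sketch (printing instead of writing, since the real flow stages through /tmp/h.$$ and sudo cp):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "172.27.224.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Same predicate as the grep -v $'\thost.minikube.internal$' above.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
        fmt.Print(out)
    }
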
	I0428 18:08:17.721764    6096 kubeadm.go:877] updating cluster {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 18:08:17.721764    6096 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:08:17.731528    6096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:08:17.754353    6096 docker.go:685] Got preloaded images: 
	I0428 18:08:17.754353    6096 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0428 18:08:17.767354    6096 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 18:08:17.784000    6096 command_runner.go:139] > {"Repositories":{}}
	I0428 18:08:17.796984    6096 ssh_runner.go:195] Run: which lz4
	I0428 18:08:17.804305    6096 command_runner.go:130] > /usr/bin/lz4
	I0428 18:08:17.804305    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 18:08:17.820436    6096 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0428 18:08:17.825468    6096 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 18:08:17.826548    6096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 18:08:17.826707    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0428 18:08:20.540517    6096 docker.go:649] duration metric: took 2.7357122s to copy over tarball
	I0428 18:08:20.554184    6096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 18:08:29.068104    6096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5132396s)
	I0428 18:08:29.068185    6096 ssh_runner.go:146] rm: /preloaded.tar.lz4
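
The preload step above follows a probe-copy-extract pattern: a failed `stat` on /preloaded.tar.lz4 means the tarball must be transferred, after which it is unpacked into /var with xattrs preserved and removed. A compressed sketch of the guest-side portion, assuming lz4 and sudo are available inside the VM:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        // Existence probe; the log's failing `stat -c "%s %y"` plays this role.
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("tarball missing; would scp the preload over SSH here")
            return
        }
        // Mirrors the extraction command shown above.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
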
	I0428 18:08:29.135672    6096 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0428 18:08:29.155772    6096 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0428 18:08:29.155772    6096 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0428 18:08:29.209385    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:08:29.412599    6096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:08:32.744383    6096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3317776s)
	I0428 18:08:32.755950    6096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:08:32.777258    6096 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:08:32.777258    6096 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:08:32.779688    6096 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0428 18:08:32.779688    6096 cache_images.go:84] Images are preloaded, skipping loading
	I0428 18:08:32.779688    6096 kubeadm.go:928] updating node { 172.27.231.169 8443 v1.30.0 docker true true} ...
	I0428 18:08:32.780425    6096 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-788600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.231.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
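
The kubelet drop-in above is rendered per node: the binary path carries the Kubernetes version, while --hostname-override and --node-ip carry this machine's identity. A sketch of how such a line can be produced with text/template; the template string and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletLine = "ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet" +
        " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf" +
        " --config=/var/lib/kubelet/config.yaml" +
        " --hostname-override={{.NodeName}}" +
        " --kubeconfig=/etc/kubernetes/kubelet.conf" +
        " --node-ip={{.NodeIP}}\n"

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletLine))
        // Values taken from this log's node.
        _ = tmpl.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.30.0",
            "NodeName":          "multinode-788600",
            "NodeIP":            "172.27.231.169",
        })
    }
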
	I0428 18:08:32.792843    6096 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 18:08:32.835130    6096 command_runner.go:130] > cgroupfs
	I0428 18:08:32.836095    6096 cni.go:84] Creating CNI manager for ""
	I0428 18:08:32.837165    6096 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 18:08:32.837165    6096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 18:08:32.837273    6096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.231.169 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-788600 NodeName:multinode-788600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.231.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.231.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 18:08:32.837273    6096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.231.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-788600"
	  kubeletExtraArgs:
	    node-ip: 172.27.231.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
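	The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new below and later copied into place. As a sanity check it can also be validated offline; a sketch, assuming a kubeadm new enough to carry the validate subcommand (v1.26+):

	  sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml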
	
	I0428 18:08:32.851283    6096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 18:08:32.869567    6096 command_runner.go:130] > kubeadm
	I0428 18:08:32.869567    6096 command_runner.go:130] > kubectl
	I0428 18:08:32.869567    6096 command_runner.go:130] > kubelet
	I0428 18:08:32.870624    6096 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 18:08:32.883530    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 18:08:32.900653    6096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0428 18:08:32.934257    6096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 18:08:32.964410    6096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0428 18:08:33.007880    6096 ssh_runner.go:195] Run: grep 172.27.231.169	control-plane.minikube.internal$ /etc/hosts
	I0428 18:08:33.014019    6096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.231.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:08:33.046519    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:08:33.254117    6096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:08:33.284398    6096 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600 for IP: 172.27.231.169
	I0428 18:08:33.284489    6096 certs.go:194] generating shared ca certs ...
	I0428 18:08:33.284554    6096 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:33.285349    6096 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 18:08:33.285349    6096 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 18:08:33.285349    6096 certs.go:256] generating profile certs ...
	I0428 18:08:33.286574    6096 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.key
	I0428 18:08:33.286737    6096 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.crt with IP's: []
	I0428 18:08:33.544727    6096 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.crt ...
	I0428 18:08:33.544727    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.crt: {Name:mk5e5fe014ce4a78912e64986dc34766b6d4aff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:33.546228    6096 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.key ...
	I0428 18:08:33.546228    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.key: {Name:mkefdca1e810a15b9c624b46dff5736993ccc4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:33.546712    6096 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.7aec23b3
	I0428 18:08:33.547749    6096 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.7aec23b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.231.169]
	I0428 18:08:33.829961    6096 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.7aec23b3 ...
	I0428 18:08:33.829961    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.7aec23b3: {Name:mkc955c1be95eb25ac057f6fb524e94596c223e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:33.830950    6096 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.7aec23b3 ...
	I0428 18:08:33.830950    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.7aec23b3: {Name:mkacea9675888cc1e28e1a9a0d5a6fca46c543af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:33.831952    6096 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.7aec23b3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt
	I0428 18:08:33.842945    6096 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.7aec23b3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key
	I0428 18:08:33.843959    6096 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key
	I0428 18:08:33.843959    6096 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt with IP's: []
	I0428 18:08:34.030532    6096 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt ...
	I0428 18:08:34.030532    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt: {Name:mk21cf70620e82226261086f4dbb7e9dc5ac5c8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:34.032203    6096 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key ...
	I0428 18:08:34.032203    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key: {Name:mk079938cccd97b22eb27e216d61efa60dbf8534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:08:34.032878    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 18:08:34.033873    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 18:08:34.034161    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 18:08:34.034301    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 18:08:34.034301    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 18:08:34.034301    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 18:08:34.034925    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 18:08:34.044591    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 18:08:34.045247    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 18:08:34.045969    6096 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 18:08:34.045969    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 18:08:34.046196    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 18:08:34.046448    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 18:08:34.046677    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 18:08:34.046907    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 18:08:34.046907    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 18:08:34.047566    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 18:08:34.047703    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:08:34.047838    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 18:08:34.099514    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 18:08:34.141308    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 18:08:34.184952    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 18:08:34.232109    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 18:08:34.288894    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0428 18:08:34.332741    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 18:08:34.380945    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 18:08:34.427075    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 18:08:34.471769    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 18:08:34.522123    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 18:08:34.563937    6096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 18:08:34.610436    6096 ssh_runner.go:195] Run: openssl version
	I0428 18:08:34.619192    6096 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0428 18:08:34.631435    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 18:08:34.665845    6096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 18:08:34.675858    6096 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:08:34.675858    6096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:08:34.689465    6096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 18:08:34.699784    6096 command_runner.go:130] > 3ec20f2e
	I0428 18:08:34.713083    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 18:08:34.744848    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 18:08:34.776059    6096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:08:34.782885    6096 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:08:34.782994    6096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:08:34.794312    6096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:08:34.804956    6096 command_runner.go:130] > b5213941
	I0428 18:08:34.818737    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 18:08:34.851037    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 18:08:34.882579    6096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 18:08:34.891842    6096 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:08:34.892191    6096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:08:34.906102    6096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 18:08:34.914738    6096 command_runner.go:130] > 51391683
	I0428 18:08:34.930399    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
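	The openssl x509 -hash calls above compute OpenSSL's subject-name hash, which becomes the symlink name (<hash>.0) that TLS libraries look up in /etc/ssl/certs. The convention, as a sketch (file names are illustrative):

	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	  # c_rehash(1) performs the same operation for an entire directory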
	I0428 18:08:34.962856    6096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:08:34.968942    6096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 18:08:34.968942    6096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 18:08:34.969897    6096 kubeadm.go:391] StartCluster: {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:08:34.978203    6096 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:08:35.014894    6096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 18:08:35.034563    6096 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0428 18:08:35.035571    6096 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0428 18:08:35.035571    6096 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0428 18:08:35.047826    6096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 18:08:35.085278    6096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 18:08:35.101373    6096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0428 18:08:35.101373    6096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0428 18:08:35.101373    6096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0428 18:08:35.101373    6096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:08:35.102532    6096 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:08:35.102532    6096 kubeadm.go:156] found existing configuration files:
	
	I0428 18:08:35.114850    6096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 18:08:35.131824    6096 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:08:35.132288    6096 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:08:35.146538    6096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 18:08:35.176343    6096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 18:08:35.193674    6096 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:08:35.194764    6096 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:08:35.206207    6096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 18:08:35.239514    6096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 18:08:35.256479    6096 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:08:35.257499    6096 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:08:35.270572    6096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 18:08:35.303579    6096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 18:08:35.321237    6096 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:08:35.321991    6096 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:08:35.333738    6096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 18:08:35.351683    6096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 18:08:35.776734    6096 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 18:08:35.776734    6096 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 18:08:49.467629    6096 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 18:08:49.467722    6096 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0428 18:08:49.467783    6096 command_runner.go:130] > [preflight] Running pre-flight checks
	I0428 18:08:49.467890    6096 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 18:08:49.468020    6096 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 18:08:49.468020    6096 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 18:08:49.468020    6096 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 18:08:49.468020    6096 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 18:08:49.468601    6096 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 18:08:49.468601    6096 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0428 18:08:49.468601    6096 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 18:08:49.468601    6096 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 18:08:49.475145    6096 out.go:204]   - Generating certificates and keys ...
	I0428 18:08:49.475330    6096 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0428 18:08:49.475330    6096 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 18:08:49.475330    6096 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0428 18:08:49.475330    6096 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 18:08:49.475330    6096 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 18:08:49.475330    6096 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 18:08:49.476070    6096 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 18:08:49.476070    6096 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0428 18:08:49.476344    6096 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 18:08:49.476365    6096 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0428 18:08:49.476429    6096 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 18:08:49.476429    6096 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0428 18:08:49.476429    6096 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 18:08:49.476429    6096 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0428 18:08:49.476981    6096 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-788600] and IPs [172.27.231.169 127.0.0.1 ::1]
	I0428 18:08:49.476981    6096 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-788600] and IPs [172.27.231.169 127.0.0.1 ::1]
	I0428 18:08:49.477139    6096 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 18:08:49.477139    6096 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0428 18:08:49.477139    6096 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-788600] and IPs [172.27.231.169 127.0.0.1 ::1]
	I0428 18:08:49.477139    6096 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-788600] and IPs [172.27.231.169 127.0.0.1 ::1]
	I0428 18:08:49.477844    6096 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 18:08:49.477881    6096 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 18:08:49.477881    6096 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 18:08:49.477881    6096 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 18:08:49.477881    6096 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 18:08:49.477881    6096 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0428 18:08:49.477881    6096 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 18:08:49.477881    6096 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 18:08:49.478635    6096 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 18:08:49.478635    6096 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 18:08:49.478635    6096 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 18:08:49.478635    6096 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 18:08:49.478635    6096 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 18:08:49.478635    6096 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 18:08:49.479170    6096 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 18:08:49.479229    6096 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 18:08:49.479229    6096 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 18:08:49.479229    6096 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 18:08:49.479229    6096 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 18:08:49.479229    6096 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 18:08:49.479229    6096 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 18:08:49.479749    6096 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 18:08:49.483301    6096 out.go:204]   - Booting up control plane ...
	I0428 18:08:49.483301    6096 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 18:08:49.483301    6096 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 18:08:49.483859    6096 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 18:08:49.483859    6096 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 18:08:49.483859    6096 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 18:08:49.483859    6096 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 18:08:49.484461    6096 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 18:08:49.484461    6096 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 18:08:49.484461    6096 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 18:08:49.484461    6096 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 18:08:49.484461    6096 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 18:08:49.484461    6096 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0428 18:08:49.485132    6096 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 18:08:49.485132    6096 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 18:08:49.485132    6096 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 18:08:49.485132    6096 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 18:08:49.485132    6096 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 510.151975ms
	I0428 18:08:49.485132    6096 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 510.151975ms
	I0428 18:08:49.485132    6096 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 18:08:49.485699    6096 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 18:08:49.485903    6096 kubeadm.go:309] [api-check] The API server is healthy after 7.002705387s
	I0428 18:08:49.485903    6096 command_runner.go:130] > [api-check] The API server is healthy after 7.002705387s
	I0428 18:08:49.485903    6096 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 18:08:49.485903    6096 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 18:08:49.486423    6096 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 18:08:49.486477    6096 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 18:08:49.486629    6096 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0428 18:08:49.486687    6096 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 18:08:49.486749    6096 command_runner.go:130] > [mark-control-plane] Marking the node multinode-788600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 18:08:49.486749    6096 kubeadm.go:309] [mark-control-plane] Marking the node multinode-788600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 18:08:49.486749    6096 command_runner.go:130] > [bootstrap-token] Using token: 6vduv5.3gzxzsmnb3stie7j
	I0428 18:08:49.486749    6096 kubeadm.go:309] [bootstrap-token] Using token: 6vduv5.3gzxzsmnb3stie7j
	I0428 18:08:49.490433    6096 out.go:204]   - Configuring RBAC rules ...
	I0428 18:08:49.491148    6096 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 18:08:49.491148    6096 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 18:08:49.491148    6096 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 18:08:49.491148    6096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 18:08:49.491799    6096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 18:08:49.491799    6096 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 18:08:49.491799    6096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 18:08:49.491799    6096 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 18:08:49.491799    6096 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 18:08:49.492345    6096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 18:08:49.492504    6096 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 18:08:49.492504    6096 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 18:08:49.492504    6096 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 18:08:49.492504    6096 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 18:08:49.492504    6096 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0428 18:08:49.492504    6096 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 18:08:49.493100    6096 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 18:08:49.493100    6096 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0428 18:08:49.493100    6096 kubeadm.go:309] 
	I0428 18:08:49.493191    6096 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 18:08:49.493191    6096 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0428 18:08:49.493191    6096 kubeadm.go:309] 
	I0428 18:08:49.493191    6096 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0428 18:08:49.493191    6096 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 18:08:49.493191    6096 kubeadm.go:309] 
	I0428 18:08:49.493191    6096 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 18:08:49.493191    6096 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0428 18:08:49.493191    6096 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 18:08:49.493191    6096 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 18:08:49.493191    6096 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 18:08:49.493191    6096 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 18:08:49.493191    6096 kubeadm.go:309] 
	I0428 18:08:49.494088    6096 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 18:08:49.494088    6096 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0428 18:08:49.494088    6096 kubeadm.go:309] 
	I0428 18:08:49.494088    6096 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 18:08:49.494088    6096 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 18:08:49.494088    6096 kubeadm.go:309] 
	I0428 18:08:49.494088    6096 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 18:08:49.494088    6096 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0428 18:08:49.494088    6096 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 18:08:49.494088    6096 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 18:08:49.494088    6096 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 18:08:49.494088    6096 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 18:08:49.494088    6096 kubeadm.go:309] 
	I0428 18:08:49.494088    6096 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 18:08:49.494088    6096 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0428 18:08:49.495088    6096 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 18:08:49.495088    6096 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0428 18:08:49.495088    6096 kubeadm.go:309] 
	I0428 18:08:49.495088    6096 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 6vduv5.3gzxzsmnb3stie7j \
	I0428 18:08:49.495088    6096 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6vduv5.3gzxzsmnb3stie7j \
	I0428 18:08:49.495088    6096 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 18:08:49.495088    6096 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c \
	I0428 18:08:49.495677    6096 kubeadm.go:309] 	--control-plane 
	I0428 18:08:49.495677    6096 command_runner.go:130] > 	--control-plane 
	I0428 18:08:49.495677    6096 kubeadm.go:309] 
	I0428 18:08:49.495774    6096 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 18:08:49.495774    6096 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0428 18:08:49.495774    6096 kubeadm.go:309] 
	I0428 18:08:49.495774    6096 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6vduv5.3gzxzsmnb3stie7j \
	I0428 18:08:49.495774    6096 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 6vduv5.3gzxzsmnb3stie7j \
	I0428 18:08:49.495774    6096 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
	I0428 18:08:49.495774    6096 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
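	The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes: it is the SHA-256 of the CA's DER-encoded public key. It can be re-derived on the control plane with the pipeline from the kubeadm docs; a sketch, assuming minikube's CA location (stock kubeadm keeps it at /etc/kubernetes/pki/ca.crt):

	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'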
	I0428 18:08:49.495774    6096 cni.go:84] Creating CNI manager for ""
	I0428 18:08:49.495774    6096 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 18:08:49.498724    6096 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 18:08:49.514726    6096 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 18:08:49.522567    6096 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0428 18:08:49.522567    6096 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0428 18:08:49.522567    6096 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0428 18:08:49.522567    6096 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:08:49.522567    6096 command_runner.go:130] > Access: 2024-04-29 01:06:56.360847800 +0000
	I0428 18:08:49.522761    6096 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0428 18:08:49.522787    6096 command_runner.go:130] > Change: 2024-04-28 18:06:47.652000000 +0000
	I0428 18:08:49.522787    6096 command_runner.go:130] >  Birth: -
	I0428 18:08:49.522857    6096 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 18:08:49.522922    6096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 18:08:49.575144    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 18:08:50.148298    6096 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0428 18:08:50.148392    6096 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0428 18:08:50.148392    6096 command_runner.go:130] > serviceaccount/kindnet created
	I0428 18:08:50.148392    6096 command_runner.go:130] > daemonset.apps/kindnet created
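	With the kindnet objects applied, CNI readiness can be confirmed by waiting on the DaemonSet rollout; a sketch, assuming the DaemonSet lands in kube-system as minikube's manifest specifies:

	  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset/kindnet --timeout=120s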
	I0428 18:08:50.148392    6096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 18:08:50.162986    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:50.162986    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-788600 minikube.k8s.io/updated_at=2024_04_28T18_08_50_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=multinode-788600 minikube.k8s.io/primary=true
	I0428 18:08:50.197012    6096 command_runner.go:130] > -16
	I0428 18:08:50.198689    6096 ops.go:34] apiserver oom_adj: -16
	I0428 18:08:50.479807    6096 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0428 18:08:50.483881    6096 command_runner.go:130] > node/multinode-788600 labeled
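	The labeling step above stamps minikube's version, commit, and primary-node metadata onto the node; it can be inspected afterwards, e.g.:

	  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get node multinode-788600 --show-labels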
	I0428 18:08:50.497233    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:50.606845    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:51.003610    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:51.114397    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:51.507427    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:51.620642    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:52.009612    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:52.117242    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:52.509638    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:52.617918    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:53.012300    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:53.131500    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:53.511301    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:53.624805    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:54.011026    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:54.126195    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:54.496246    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:54.608882    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:54.998702    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:55.115482    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:55.508608    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:55.624858    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:56.011466    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:56.131661    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:56.504064    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:56.618000    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:57.008115    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:57.130144    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:57.507999    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:57.619698    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:58.005932    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:58.129115    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:58.508605    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:58.614576    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:58.997198    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:59.133236    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:08:59.497486    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:08:59.627438    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:09:00.007944    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:09:00.125926    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:09:00.503132    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:09:00.610260    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:09:01.009176    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:09:01.139381    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:09:01.497948    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:09:01.611319    6096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0428 18:09:02.001115    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 18:09:02.147155    6096 command_runner.go:130] > NAME      SECRETS   AGE
	I0428 18:09:02.147236    6096 command_runner.go:130] > default   0         0s
	I0428 18:09:02.147236    6096 kubeadm.go:1107] duration metric: took 11.9988207s to wait for elevateKubeSystemPrivileges
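	The repeated "kubectl get sa default" failures above are a readiness poll, not an error: kube-controller-manager creates the "default" ServiceAccount asynchronously, so minikube retries until it appears (~12s here). The equivalent loop, as a sketch:

	  until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done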
	W0428 18:09:02.147236    6096 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 18:09:02.147236    6096 kubeadm.go:393] duration metric: took 27.1772868s to StartCluster
	I0428 18:09:02.147236    6096 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:09:02.147236    6096 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:09:02.149242    6096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:09:02.150709    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 18:09:02.150830    6096 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 18:09:02.154043    6096 out.go:177] * Verifying Kubernetes components...
	I0428 18:09:02.150965    6096 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 18:09:02.151567    6096 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:09:02.154135    6096 addons.go:69] Setting storage-provisioner=true in profile "multinode-788600"
	I0428 18:09:02.154225    6096 addons.go:234] Setting addon storage-provisioner=true in "multinode-788600"
	I0428 18:09:02.154225    6096 addons.go:69] Setting default-storageclass=true in profile "multinode-788600"
	I0428 18:09:02.154319    6096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-788600"
	I0428 18:09:02.154402    6096 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:09:02.157606    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:09:02.158355    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:09:02.170792    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:09:02.419427    6096 command_runner.go:130] > apiVersion: v1
	I0428 18:09:02.419427    6096 command_runner.go:130] > data:
	I0428 18:09:02.419427    6096 command_runner.go:130] >   Corefile: |
	I0428 18:09:02.419427    6096 command_runner.go:130] >     .:53 {
	I0428 18:09:02.419427    6096 command_runner.go:130] >         errors
	I0428 18:09:02.419427    6096 command_runner.go:130] >         health {
	I0428 18:09:02.419427    6096 command_runner.go:130] >            lameduck 5s
	I0428 18:09:02.419536    6096 command_runner.go:130] >         }
	I0428 18:09:02.419536    6096 command_runner.go:130] >         ready
	I0428 18:09:02.419536    6096 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0428 18:09:02.419536    6096 command_runner.go:130] >            pods insecure
	I0428 18:09:02.419619    6096 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0428 18:09:02.419619    6096 command_runner.go:130] >            ttl 30
	I0428 18:09:02.419653    6096 command_runner.go:130] >         }
	I0428 18:09:02.419653    6096 command_runner.go:130] >         prometheus :9153
	I0428 18:09:02.419653    6096 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0428 18:09:02.419653    6096 command_runner.go:130] >            max_concurrent 1000
	I0428 18:09:02.419653    6096 command_runner.go:130] >         }
	I0428 18:09:02.419653    6096 command_runner.go:130] >         cache 30
	I0428 18:09:02.419729    6096 command_runner.go:130] >         loop
	I0428 18:09:02.419729    6096 command_runner.go:130] >         reload
	I0428 18:09:02.419729    6096 command_runner.go:130] >         loadbalance
	I0428 18:09:02.419729    6096 command_runner.go:130] >     }
	I0428 18:09:02.419729    6096 command_runner.go:130] > kind: ConfigMap
	I0428 18:09:02.419794    6096 command_runner.go:130] > metadata:
	I0428 18:09:02.419794    6096 command_runner.go:130] >   creationTimestamp: "2024-04-29T01:08:48Z"
	I0428 18:09:02.419794    6096 command_runner.go:130] >   name: coredns
	I0428 18:09:02.419836    6096 command_runner.go:130] >   namespace: kube-system
	I0428 18:09:02.419836    6096 command_runner.go:130] >   resourceVersion: "240"
	I0428 18:09:02.419836    6096 command_runner.go:130] >   uid: 87332b27-268f-442c-b1c3-6878b598db2d
	I0428 18:09:02.420290    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
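
To unpack the sed pipeline above: it edits the Corefile fetched a moment earlier in two places, inserting a log directive before errors and, before the forward block, a hosts stanza that resolves host.minikube.internal to the Windows host (172.27.224.1). Reassembled from the two sed expressions, the injected stanza is:

    hosts {
       172.27.224.1 host.minikube.internal
       fallthrough
    }

The "configmap/coredns replaced" line just below confirms the rewritten ConfigMap was accepted.
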
	I0428 18:09:02.562286    6096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:09:03.030536    6096 command_runner.go:130] > configmap/coredns replaced
	I0428 18:09:03.030536    6096 start.go:946] {"host.minikube.internal": 172.27.224.1} host record injected into CoreDNS's ConfigMap
	I0428 18:09:03.032088    6096 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:09:03.032088    6096 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:09:03.033011    6096 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.231.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 18:09:03.033212    6096 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.231.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 18:09:03.034455    6096 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 18:09:03.035011    6096 node_ready.go:35] waiting up to 6m0s for node "multinode-788600" to be "Ready" ...
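
From this point node_ready.go polls GET /api/v1/nodes/multinode-788600 roughly every 500ms until the node's Ready condition turns True, which is why the same request/response block repeats below. A minimal client-go sketch of such a readiness wait, assuming a kubeconfig-backed clientset (waitNodeReady and the kubeconfig path are illustrative, not minikube's actual helpers):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports the
    // Ready condition as True, mirroring the GET loop in the log.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        fmt.Println(waitNodeReady(cs, "multinode-788600", 6*time.Minute))
    }

The 500ms interval and 6m timeout match the cadence and the "waiting up to 6m0s" budget visible in the log.
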
	I0428 18:09:03.035174    6096 round_trippers.go:463] GET https://172.27.231.169:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0428 18:09:03.035223    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:03.035223    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:03.035223    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:03.035223    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:03.035223    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:03.035391    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:03.035515    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:03.058371    6096 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0428 18:09:03.058371    6096 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0428 18:09:03.058972    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:03.058972    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:03.058972    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:03 GMT
	I0428 18:09:03.058972    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:03 GMT
	I0428 18:09:03.058972    6096 round_trippers.go:580]     Audit-Id: 1d1d876a-57b1-42e0-890c-8fed84cd0e0c
	I0428 18:09:03.059044    6096 round_trippers.go:580]     Audit-Id: 9ac258fe-9a30-48ea-b4a8-6a27f7584549
	I0428 18:09:03.059131    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:03.059044    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:03.059131    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:03.059207    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:03.059207    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:03.059131    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:03.059277    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:03.059277    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:03.059277    6096 round_trippers.go:580]     Content-Length: 291
	I0428 18:09:03.059485    6096 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2fe500f8-32b3-41bc-997c-e504ea6b3a06","resourceVersion":"336","creationTimestamp":"2024-04-29T01:08:48Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0428 18:09:03.059642    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:03.060827    6096 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2fe500f8-32b3-41bc-997c-e504ea6b3a06","resourceVersion":"336","creationTimestamp":"2024-04-29T01:08:48Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0428 18:09:03.061056    6096 round_trippers.go:463] PUT https://172.27.231.169:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0428 18:09:03.061116    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:03.061116    6096 round_trippers.go:473]     Content-Type: application/json
	I0428 18:09:03.061116    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:03.061116    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:03.092104    6096 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0428 18:09:03.092238    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:03.092238    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:03.092238    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:03.092238    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:03.092238    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:03.092238    6096 round_trippers.go:580]     Content-Length: 291
	I0428 18:09:03.092315    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:03 GMT
	I0428 18:09:03.092315    6096 round_trippers.go:580]     Audit-Id: 37f1c2ae-ef1a-4875-b3d9-d77de70e500a
	I0428 18:09:03.093577    6096 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2fe500f8-32b3-41bc-997c-e504ea6b3a06","resourceVersion":"366","creationTimestamp":"2024-04-29T01:08:48Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0428 18:09:03.541851    6096 round_trippers.go:463] GET https://172.27.231.169:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0428 18:09:03.541920    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:03.541920    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:03.541920    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:03.541971    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:03.541971    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:03.542099    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:03.542099    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:03.544747    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:03.545755    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:03.545828    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:03.545828    6096 round_trippers.go:580]     Audit-Id: 06edca32-afef-4f57-ad59-af547025e29b
	I0428 18:09:03.545828    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:03.545828    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:03.545828    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:03.545828    6096 round_trippers.go:580]     Audit-Id: 2f0de776-d257-4832-9a0c-c4987e3de4c5
	I0428 18:09:03.545828    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:03.545828    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:03.545828    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:03.545956    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:03.545956    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:03.545956    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:03.546054    6096 round_trippers.go:580]     Content-Length: 291
	I0428 18:09:03.546054    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:03 GMT
	I0428 18:09:03.545956    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:03 GMT
	I0428 18:09:03.546054    6096 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2fe500f8-32b3-41bc-997c-e504ea6b3a06","resourceVersion":"391","creationTimestamp":"2024-04-29T01:08:48Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0428 18:09:03.546054    6096 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-788600" context rescaled to 1 replicas
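
The rescale confirmed on this line is a plain read-modify-write of the coredns deployment's Scale subresource: GET the scale, set spec.replicas to 1, PUT it back — exactly the GET and PUT pair logged at 18:09:03 above. A hedged client-go equivalent (rescaleCoreDNS is illustrative; clientset and imports as in the readiness sketch earlier):

    // rescaleCoreDNS drops the coredns deployment to a single replica via
    // the Scale subresource, matching the GET + PUT pair in the log above.
    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
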
	I0428 18:09:03.546054    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:04.048803    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:04.049046    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:04.049046    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:04.049129    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:04.053985    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:04.053985    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:04.053985    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:04 GMT
	I0428 18:09:04.053985    6096 round_trippers.go:580]     Audit-Id: 031b8d91-446d-4fc5-83b0-4f2e0e1b65d9
	I0428 18:09:04.053985    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:04.053985    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:04.053985    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:04.053985    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:04.053985    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:04.361480    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:09:04.361480    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:04.361480    6096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:09:04.366874    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:09:04.367838    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:04.368029    6096 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 18:09:04.368073    6096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 18:09:04.368073    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:09:04.369005    6096 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:09:04.369700    6096 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.231.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 18:09:04.370583    6096 addons.go:234] Setting addon default-storageclass=true in "multinode-788600"
	I0428 18:09:04.370695    6096 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:09:04.371816    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:09:04.540741    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:04.540863    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:04.540863    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:04.540863    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:04.544238    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:04.544607    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:04.544607    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:04 GMT
	I0428 18:09:04.544607    6096 round_trippers.go:580]     Audit-Id: b0967279-582a-47b4-8a12-1c3aa4c310c3
	I0428 18:09:04.544607    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:04.544699    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:04.544699    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:04.544699    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:04.544951    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:05.047436    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:05.047436    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:05.047644    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:05.047644    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:05.053663    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:05.053663    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:05.053663    6096 round_trippers.go:580]     Audit-Id: b70b8313-8c8c-4a2d-87e3-9597953a72f5
	I0428 18:09:05.053663    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:05.053663    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:05.053663    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:05.053663    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:05.053663    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:05 GMT
	I0428 18:09:05.053663    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:05.054492    6096 node_ready.go:53] node "multinode-788600" has status "Ready":"False"
	I0428 18:09:05.537270    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:05.537270    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:05.537270    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:05.537270    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:05.540974    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:05.540974    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:05.540974    6096 round_trippers.go:580]     Audit-Id: cbaa9260-2dd7-4727-8ad9-3ea632775cc7
	I0428 18:09:05.540974    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:05.541822    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:05.541822    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:05.541822    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:05.541822    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:05 GMT
	I0428 18:09:05.542115    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:06.041657    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:06.041657    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:06.041657    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:06.042702    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:06.078978    6096 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0428 18:09:06.079398    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:06.079398    6096 round_trippers.go:580]     Audit-Id: 32d49fb8-bbee-4f80-877a-62ca16e2c4a1
	I0428 18:09:06.079470    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:06.079470    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:06.079470    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:06.079470    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:06.079470    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:06 GMT
	I0428 18:09:06.079470    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:06.535144    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:06.535144    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:06.535144    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:06.535144    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:06.539148    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:06.539148    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:06.539148    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:06.539148    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:06 GMT
	I0428 18:09:06.539148    6096 round_trippers.go:580]     Audit-Id: 6b5acbca-735f-4bfd-adee-0f2c8dbffc94
	I0428 18:09:06.539148    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:06.539148    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:06.539148    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:06.539148    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:06.600442    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:09:06.600442    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:06.600524    6096 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 18:09:06.600524    6096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 18:09:06.600524    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:09:06.696990    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:09:06.697525    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:06.697525    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:09:07.040707    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:07.040707    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:07.040800    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:07.040800    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:07.044323    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:07.044323    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:07.044323    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:07.044323    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:07 GMT
	I0428 18:09:07.044323    6096 round_trippers.go:580]     Audit-Id: 2b56180f-0a40-4b7b-943b-1905f8952a82
	I0428 18:09:07.044323    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:07.044323    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:07.044323    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:07.044323    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:07.549316    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:07.549316    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:07.549316    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:07.549316    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:07.552933    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:07.553643    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:07.553722    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:07.553794    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:07.553861    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:07.553861    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:07.553921    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:07 GMT
	I0428 18:09:07.553921    6096 round_trippers.go:580]     Audit-Id: 31136979-fdec-4ae4-9be5-3fdc3b11e665
	I0428 18:09:07.554267    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:07.555114    6096 node_ready.go:53] node "multinode-788600" has status "Ready":"False"
	I0428 18:09:08.046046    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:08.046046    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:08.046046    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:08.046046    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:08.056202    6096 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 18:09:08.056202    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:08.056202    6096 round_trippers.go:580]     Audit-Id: bc297c65-a6da-4426-9f05-898c0137dfeb
	I0428 18:09:08.056202    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:08.056202    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:08.056202    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:08.056202    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:08.056202    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:08 GMT
	I0428 18:09:08.056202    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:08.537871    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:08.537871    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:08.537871    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:08.538127    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:08.543067    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:08.543067    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:08.543067    6096 round_trippers.go:580]     Audit-Id: 78716f73-375f-45a9-9182-09d11656e23c
	I0428 18:09:08.543067    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:08.543067    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:08.543067    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:08.543067    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:08.543067    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:08 GMT
	I0428 18:09:08.543849    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:08.770027    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:09:08.770206    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:08.770206    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:09:09.039196    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:09.039260    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:09.039260    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:09.039260    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:09.056422    6096 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0428 18:09:09.056488    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:09.056488    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:09.056488    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:09.056488    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:09 GMT
	I0428 18:09:09.056488    6096 round_trippers.go:580]     Audit-Id: 4b31f4b7-b2b5-43f0-b446-33c753e9cf5d
	I0428 18:09:09.056653    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:09.056653    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:09.057005    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:09.296376    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:09:09.297489    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:09.297656    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
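
sshutil.go has just assembled an SSH client from the pieces in the struct above — the VM's IP on port 22, the per-machine id_rsa key, and the docker user — and the next line uses it to kubectl-apply the addon manifest inside the guest. A rough standalone equivalent with golang.org/x/crypto/ssh (field values taken from the log line; the systemctl command at the end is only an example payload):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and address come from the sshutil.go log line above.
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.27.231.169:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs typically skip host-key pinning
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Print(string(out))
    }
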
	I0428 18:09:09.462339    6096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 18:09:09.543550    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:09.543550    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:09.543550    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:09.543550    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:09.554590    6096 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 18:09:09.554590    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:09.554590    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:09.554590    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:09 GMT
	I0428 18:09:09.554590    6096 round_trippers.go:580]     Audit-Id: d6da0d64-c8ec-48ae-8989-1eea00d9b573
	I0428 18:09:09.554590    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:09.554590    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:09.554590    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:09.554590    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:09.556085    6096 node_ready.go:53] node "multinode-788600" has status "Ready":"False"
	I0428 18:09:10.036438    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:10.036526    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:10.036526    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:10.036526    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:10.038858    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:10.038858    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:10.038858    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:10.038858    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:10 GMT
	I0428 18:09:10.038858    6096 round_trippers.go:580]     Audit-Id: d6ad608e-45ec-413d-a9a2-6c93bdd26712
	I0428 18:09:10.038858    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:10.038858    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:10.038858    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:10.039894    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:10.230888    6096 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0428 18:09:10.230956    6096 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0428 18:09:10.230956    6096 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0428 18:09:10.230956    6096 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0428 18:09:10.230956    6096 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0428 18:09:10.230956    6096 command_runner.go:130] > pod/storage-provisioner created
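
The six command_runner lines above enumerate everything storage-provisioner.yaml creates: a ServiceAccount, a ClusterRoleBinding, a Role and RoleBinding, an Endpoints object, and the storage-provisioner pod itself. A small hedged follow-up check that the pod actually reaches Running (checkProvisioner is illustrative; clientset and imports as in the readiness sketch earlier):

    // checkProvisioner reports the storage-provisioner pod's phase; the pod
    // name comes from the "pod/storage-provisioner created" line above.
    func checkProvisioner(ctx context.Context, cs *kubernetes.Clientset) error {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "storage-provisioner", metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Println("storage-provisioner phase:", pod.Status.Phase)
        return nil
    }
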
	I0428 18:09:10.537042    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:10.537042    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:10.537042    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:10.537042    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:10.544391    6096 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:09:10.544391    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:10.544391    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:10.544509    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:10.544509    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:10.544509    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:10.544509    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:10 GMT
	I0428 18:09:10.544509    6096 round_trippers.go:580]     Audit-Id: b42cb95f-1f99-418d-aac6-fc28d1a4f969
	I0428 18:09:10.544573    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:11.044369    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:11.044369    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:11.044369    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:11.044459    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:11.049024    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:11.049709    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:11.049709    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:11.049709    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:11.049709    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:11.049709    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:11 GMT
	I0428 18:09:11.049709    6096 round_trippers.go:580]     Audit-Id: 362b808a-212e-40ec-bbd9-a4394e328cc7
	I0428 18:09:11.049709    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:11.051412    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:11.334286    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:09:11.334429    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:11.334582    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:09:11.474320    6096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
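[Editor's note: the three lines above show the pattern minikube uses to run commands inside the VM: libmachine resolves the machine's IP, sshutil.go opens a key-authenticated SSH client (user docker, per-machine id_rsa), and ssh_runner.go executes kubectl over that connection. The following is a minimal sketch of the same pattern using golang.org/x/crypto/ssh; the helper name runSSH is illustrative, not minikube's actual implementation.]

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH opens one key-authenticated SSH session and runs a single command,
// returning its combined stdout/stderr. Hypothetical helper, not minikube code.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// A real runner should verify the host key; skipped here for brevity.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Values mirror the log lines above: VM IP/port, docker user, per-machine key.
	out, err := runSSH("172.27.231.169:22", "docker",
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa`,
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
	fmt.Println(out, err)
}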
	I0428 18:09:11.547684    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:11.547684    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:11.547684    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:11.547684    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:11.551468    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:11.551468    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:11.551468    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:11.551468    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:11 GMT
	I0428 18:09:11.551468    6096 round_trippers.go:580]     Audit-Id: a1e947d9-974f-49a5-8c16-6e11da53b33f
	I0428 18:09:11.551468    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:11.551468    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:11.551468    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:11.552227    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:11.627684    6096 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0428 18:09:11.628657    6096 round_trippers.go:463] GET https://172.27.231.169:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 18:09:11.628657    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:11.628657    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:11.628657    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:11.632703    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:11.633103    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:11.633103    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:11.633269    6096 round_trippers.go:580]     Content-Length: 1273
	I0428 18:09:11.633385    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:11 GMT
	I0428 18:09:11.633544    6096 round_trippers.go:580]     Audit-Id: 2db0d91b-3ec3-42fd-9751-f356304ac408
	I0428 18:09:11.633544    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:11.633544    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:11.633544    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:11.633544    6096 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"standard","uid":"8c11ff24-e4a0-41ca-b16d-318d83e4f5e8","resourceVersion":"418","creationTimestamp":"2024-04-29T01:09:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T01:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0428 18:09:11.634316    6096 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8c11ff24-e4a0-41ca-b16d-318d83e4f5e8","resourceVersion":"418","creationTimestamp":"2024-04-29T01:09:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T01:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0428 18:09:11.634316    6096 round_trippers.go:463] PUT https://172.27.231.169:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 18:09:11.634316    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:11.634316    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:11.634316    6096 round_trippers.go:473]     Content-Type: application/json
	I0428 18:09:11.634316    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:11.638233    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:11.638233    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:11.638609    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:11.638609    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:11.638609    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:11.638685    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:11.638799    6096 round_trippers.go:580]     Content-Length: 1220
	I0428 18:09:11.638890    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:11 GMT
	I0428 18:09:11.638890    6096 round_trippers.go:580]     Audit-Id: 1abf79e9-5d81-4eeb-97b0-61c1354ea0e7
	I0428 18:09:11.638890    6096 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8c11ff24-e4a0-41ca-b16d-318d83e4f5e8","resourceVersion":"418","creationTimestamp":"2024-04-29T01:09:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T01:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0428 18:09:11.644236    6096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 18:09:11.646185    6096 addons.go:505] duration metric: took 9.4953024s for enable addons: enabled=[storage-provisioner default-storageclass]
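[Editor's note: the sequence just completed — kubectl reporting "storageclass.storage.k8s.io/standard created", a GET of the StorageClassList, then a PUT of "standard" — is minikube ensuring the class exists and carries the storageclass.kubernetes.io/is-default-class annotation. A minimal client-go sketch of that read-then-write sequence follows, assuming a reachable kubeconfig at the default path; it is not minikube's actual default-storageclass addon code.]

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// GET /apis/storage.k8s.io/v1/storageclasses — the list call in the log.
	list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range list.Items {
		sc := &list.Items[i]
		if sc.Name != "standard" {
			continue
		}
		// Ensure the default-class annotation, then write the object back:
		// PUT /apis/storage.k8s.io/v1/storageclasses/standard, as in the log.
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Printf("ensured default StorageClass %q\n", sc.Name)
	}
}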
	I0428 18:09:12.050022    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:12.050022    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:12.050022    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:12.050022    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:12.054663    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:12.054663    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:12.054663    6096 round_trippers.go:580]     Audit-Id: c27cab77-1547-44f1-b40a-619eb85102f1
	I0428 18:09:12.054663    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:12.054663    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:12.054663    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:12.054951    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:12.054951    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:12 GMT
	I0428 18:09:12.055271    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"337","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0428 18:09:12.055271    6096 node_ready.go:53] node "multinode-788600" has status "Ready":"False"
	I0428 18:09:12.536393    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:12.536492    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:12.536492    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:12.536492    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:12.542982    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:12.542982    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:12.542982    6096 round_trippers.go:580]     Audit-Id: 6061be3b-afb7-44d6-b484-69ce591dfd40
	I0428 18:09:12.543132    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:12.543132    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:12.543132    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:12.543132    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:12.543132    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:12 GMT
	I0428 18:09:12.543592    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:12.544142    6096 node_ready.go:49] node "multinode-788600" has status "Ready":"True"
	I0428 18:09:12.544198    6096 node_ready.go:38] duration metric: took 9.5090298s for node "multinode-788600" to be "Ready" ...
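[Editor's note: the wait that just ended is a plain polling loop — node_ready.go re-issues GET /api/v1/nodes/multinode-788600 on a roughly 500ms cadence (visible in the timestamps above) and inspects the node's Ready condition; the flip from "Ready":"False" to "True" (resourceVersion 337 → 423) closed out the 9.5090298s wait. A minimal client-go sketch of that loop, under the same kubeconfig assumption as above; the helper nodeReady is illustrative, not minikube's code.]

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node carries condition Ready=True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // errors elided for brevity
	cs, _ := kubernetes.NewForConfig(cfg)

	// GET the node every 500ms, as the log does, until Ready or timeout.
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-788600", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return nodeReady(n), nil
		})
	fmt.Println("node ready:", err == nil)
}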
	I0428 18:09:12.544198    6096 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:09:12.544311    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods
	I0428 18:09:12.544366    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:12.544391    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:12.544391    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:12.549487    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:09:12.549487    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:12.549487    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:12.549487    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:12.549631    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:12.549631    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:12.549631    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:12 GMT
	I0428 18:09:12.549631    6096 round_trippers.go:580]     Audit-Id: d4580697-bbaf-42a7-9822-42d8cf16f76c
	I0428 18:09:12.551334    6096 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"428","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54515 chars]
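[Editor's note: the single PodList fetch above seeds the "extra waiting" phase: minikube pulls every pod in kube-system once, then tracks each pod matching the system-critical labels individually. A short sketch of that label filtering, under the same clientset assumptions as the earlier sketches; not minikube's pod_ready.go.]

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // errors elided for brevity
	cs, _ := kubernetes.NewForConfig(cfg)

	// One GET /api/v1/namespaces/kube-system/pods, as in the log above...
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// ...then match the system-critical labels listed by pod_ready.go:35.
	for _, sel := range []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	} {
		s, _ := labels.Parse(sel)
		for _, p := range pods.Items {
			if s.Matches(labels.Set(p.Labels)) {
				fmt.Printf("%s -> %s\n", sel, p.Name)
			}
		}
	}
}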
	I0428 18:09:12.555500    6096 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:12.555500    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:09:12.555500    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:12.555500    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:12.555500    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:12.563159    6096 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:09:12.563159    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:12.563159    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:12 GMT
	I0428 18:09:12.563159    6096 round_trippers.go:580]     Audit-Id: ea96b2d4-39aa-4ce6-a5de-1874cce50cb9
	I0428 18:09:12.563159    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:12.563159    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:12.563159    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:12.563725    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:12.563921    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"428","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0428 18:09:12.564748    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:12.564807    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:12.564807    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:12.564807    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:12.570182    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:09:12.570182    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:12.570182    6096 round_trippers.go:580]     Audit-Id: 72e0b918-cc91-4ac0-be18-f524fc576777
	I0428 18:09:12.570182    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:12.570658    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:12.570658    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:12.570658    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:12.570658    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:12 GMT
	I0428 18:09:12.570991    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:13.057230    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:09:13.057230    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:13.057230    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:13.057230    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:13.063865    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:13.063865    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:13.063865    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:13 GMT
	I0428 18:09:13.063865    6096 round_trippers.go:580]     Audit-Id: a96e8870-0808-47a1-a6b8-b65d7da6deec
	I0428 18:09:13.063865    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:13.063865    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:13.063865    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:13.063865    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:13.063865    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"428","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0428 18:09:13.064795    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:13.064795    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:13.064795    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:13.064795    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:13.068843    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:13.068843    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:13.068843    6096 round_trippers.go:580]     Audit-Id: fabfc486-1f60-40bb-b444-552fabd40182
	I0428 18:09:13.068843    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:13.068843    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:13.068843    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:13.068843    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:13.068843    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:13 GMT
	I0428 18:09:13.069620    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:13.565455    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:09:13.565455    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:13.565455    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:13.565455    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:13.571034    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:09:13.571034    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:13.571034    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:13.571034    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:13.571034    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:13 GMT
	I0428 18:09:13.571034    6096 round_trippers.go:580]     Audit-Id: b73ce894-cd39-497e-b316-96ae7357a2a8
	I0428 18:09:13.571716    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:13.571716    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:13.571716    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"428","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0428 18:09:13.572826    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:13.572826    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:13.572826    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:13.572892    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:13.588330    6096 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0428 18:09:13.588781    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:13.588781    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:13 GMT
	I0428 18:09:13.588781    6096 round_trippers.go:580]     Audit-Id: c1fb4a12-04c8-43c4-bf82-52d0058fdde2
	I0428 18:09:13.588781    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:13.588781    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:13.588781    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:13.588781    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:13.588966    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:14.056714    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:09:14.056714    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:14.056714    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:14.056714    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:14.068793    6096 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 18:09:14.068869    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:14.068869    6096 round_trippers.go:580]     Audit-Id: fb4ed27b-51a4-4659-8c17-d21d25c1660c
	I0428 18:09:14.068869    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:14.068869    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:14.068869    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:14.068869    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:14.068869    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:14 GMT
	I0428 18:09:14.068998    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"428","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0428 18:09:14.070293    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:14.070356    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:14.070356    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:14.070419    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:14.073593    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:14.073593    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:14.073593    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:14.073593    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:14 GMT
	I0428 18:09:14.073593    6096 round_trippers.go:580]     Audit-Id: 45bb31da-8ebb-46cb-b113-d8998c803292
	I0428 18:09:14.074380    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:14.074380    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:14.074380    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:14.075095    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:14.570769    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:09:14.571099    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:14.571099    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:14.571099    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:14.574927    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:14.575829    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:14.575829    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:14.575829    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:14.575829    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:14.575829    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:14 GMT
	I0428 18:09:14.575829    6096 round_trippers.go:580]     Audit-Id: 00458c0b-77cb-41eb-8edb-72691838831c
	I0428 18:09:14.575829    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:14.576095    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"428","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0428 18:09:14.576787    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:14.576787    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:14.576787    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:14.576787    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:14.579426    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:14.579426    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:14.579426    6096 round_trippers.go:580]     Audit-Id: be870ab5-1a0d-4a96-bd92-633e956f7647
	I0428 18:09:14.579426    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:14.579426    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:14.579426    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:14.579426    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:14.579426    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:14 GMT
	I0428 18:09:14.580380    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:14.580852    6096 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
	I0428 18:09:15.056215    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:09:15.056277    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.056277    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.056277    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.062809    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:09:15.062809    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.062809    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.062809    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.062809    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.062809    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.062809    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.062898    6096 round_trippers.go:580]     Audit-Id: c784f978-6cf2-488b-a3e4-b9ccd8928b54
	I0428 18:09:15.063117    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"442","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0428 18:09:15.063398    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.063398    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.063398    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.063979    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.070904    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:15.071101    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.071101    6096 round_trippers.go:580]     Audit-Id: 1919482c-3664-4e07-93d7-761410c65612
	I0428 18:09:15.071101    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.071101    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.071101    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.071179    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.071179    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.071179    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:15.071807    6096 pod_ready.go:92] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"True"
	I0428 18:09:15.071807    6096 pod_ready.go:81] duration metric: took 2.5163022s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
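[Editor's note: the coredns wait that just finished — and the identical etcd, kube-apiserver, and kube-controller-manager waits that follow — reduce to polling one pod until its PodReady condition reports True. A minimal sketch under the same assumptions as the previous blocks; the 500ms interval matches the timestamps above, and podReady is a hypothetical helper.]

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries condition Ready=True — the
// predicate behind the "Ready":"True" lines in the log.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // errors elided for brevity
	cs, _ := kubernetes.NewForConfig(cfg)

	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-rp2lx", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return podReady(p), nil
		})
	fmt.Println("coredns ready:", err == nil)
}

[Note: when a pod is already Ready on the first poll, the wait resolves in single-digit milliseconds — which is why the etcd and kube-apiserver waits below complete in 8.2975ms and 7.3745ms.]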
	I0428 18:09:15.071807    6096 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.071807    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:09:15.071807    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.071807    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.071807    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.075271    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:15.075271    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.075271    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.075271    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.075271    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.075271    6096 round_trippers.go:580]     Audit-Id: 1254f2be-a7cd-4328-b138-1f1d6a399bdd
	I0428 18:09:15.075271    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.075271    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.075271    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"9d0f8c4f-569f-4a80-8960-2210a5a24612","resourceVersion":"402","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.231.169:2379","kubernetes.io/config.hash":"589ef16acbcd1b3600cffadabab7475a","kubernetes.io/config.mirror":"589ef16acbcd1b3600cffadabab7475a","kubernetes.io/config.seen":"2024-04-29T01:08:48.885063333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0428 18:09:15.076172    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.076228    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.076228    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.076228    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.078455    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:15.078455    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.078455    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.078455    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.078455    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.078455    6096 round_trippers.go:580]     Audit-Id: 3382da74-bf47-4341-8e3b-781b855fc9f8
	I0428 18:09:15.078455    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.078455    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.079629    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:15.080047    6096 pod_ready.go:92] pod "etcd-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:09:15.080104    6096 pod_ready.go:81] duration metric: took 8.2975ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.080104    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.080171    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:09:15.080228    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.080228    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.080284    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.082453    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:15.082453    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.082453    6096 round_trippers.go:580]     Audit-Id: 60e171a2-cb27-44ed-83d5-68ada7742e86
	I0428 18:09:15.083073    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.083073    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.083073    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.083073    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.083073    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.083145    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"e5571b43-6397-459f-b12d-b3d7f5b95eb0","resourceVersion":"404","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.231.169:8443","kubernetes.io/config.hash":"5553c54a41b436754fc14166f7928d5c","kubernetes.io/config.mirror":"5553c54a41b436754fc14166f7928d5c","kubernetes.io/config.seen":"2024-04-29T01:08:48.885068633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0428 18:09:15.083446    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.083446    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.083446    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.083446    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.086332    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:15.086332    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.086332    6096 round_trippers.go:580]     Audit-Id: bfeab54c-813b-4b02-b8c7-34edd8bcc3db
	I0428 18:09:15.086332    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.086332    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.086332    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.086332    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.086332    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.087032    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:15.087479    6096 pod_ready.go:92] pod "kube-apiserver-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:09:15.087479    6096 pod_ready.go:81] duration metric: took 7.3745ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.087612    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.087885    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:09:15.087885    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.087885    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.087885    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.090155    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:15.090155    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.090554    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.090554    6096 round_trippers.go:580]     Audit-Id: 5d0b6a96-e1ed-4085-8ea3-791cf4624b09
	I0428 18:09:15.090554    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.090554    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.090554    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.090554    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.090692    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"405","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0428 18:09:15.091777    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.091777    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.091777    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.091909    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.095108    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:15.095108    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.095108    6096 round_trippers.go:580]     Audit-Id: e7dfb49f-2f09-42f2-b1cb-6238e316d637
	I0428 18:09:15.095108    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.095108    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.095108    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.095108    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.095108    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.095518    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:15.095688    6096 pod_ready.go:92] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:09:15.095688    6096 pod_ready.go:81] duration metric: took 8.0765ms for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.095688    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.096224    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:09:15.096289    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.096289    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.096289    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.098566    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:09:15.098566    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.098566    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.098566    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.099213    6096 round_trippers.go:580]     Audit-Id: 9d4f5f24-ee74-4473-963a-b8cdf96db9fe
	I0428 18:09:15.099213    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.099213    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.099213    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.099481    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"397","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0428 18:09:15.101192    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.101192    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.101192    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.101192    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.107044    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:09:15.107187    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.107187    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.107187    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.107276    6096 round_trippers.go:580]     Audit-Id: e5925f26-c645-46a9-8afe-8397e8e0e304
	I0428 18:09:15.107276    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.107276    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.107307    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.107307    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:15.107848    6096 pod_ready.go:92] pod "kube-proxy-bkkql" in "kube-system" namespace has status "Ready":"True"
	I0428 18:09:15.107890    6096 pod_ready.go:81] duration metric: took 12.1592ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.107890    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.266549    6096 request.go:629] Waited for 158.6172ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:09:15.266732    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:09:15.266732    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.266732    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.266732    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.270767    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:15.271735    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.271735    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.271735    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.271735    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.271784    6096 round_trippers.go:580]     Audit-Id: d0afd806-6ede-4eff-aafd-afb7b1480819
	I0428 18:09:15.271784    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.271784    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.271962    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"403","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0428 18:09:15.468590    6096 request.go:629] Waited for 195.9642ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.468849    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:09:15.468849    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.468965    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.468965    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.472581    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:09:15.473345    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.473345    6096 round_trippers.go:580]     Audit-Id: 0ba72089-bc38-4cff-a182-472942c0e5a1
	I0428 18:09:15.473345    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.473345    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.473345    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.473345    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.473345    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.473668    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0428 18:09:15.474133    6096 pod_ready.go:92] pod "kube-scheduler-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:09:15.474226    6096 pod_ready.go:81] duration metric: took 366.2939ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:09:15.474226    6096 pod_ready.go:38] duration metric: took 2.9300225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
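
The pod_ready waits above poll each control-plane pod until its PodReady condition reports True, and the recurring "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter, not from server-side priority and fairness. A minimal sketch of the same readiness check with client-go, assuming a reachable kubeconfig; the kubeconfig path is a placeholder, and the raised QPS/Burst values (which would shorten those throttling waits) are illustrative, not minikube's settings:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True,
    // the same condition the pod_ready.go waits above are checking.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        // Raising QPS/Burst reduces the client-side throttling waits seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        client := kubernetes.NewForConfigOrDie(cfg)

        // The real wait is bounded at 6m0s; this sketch just loops.
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "kube-scheduler-multinode-788600", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // simple fixed backoff for the sketch
        }
    }
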
	I0428 18:09:15.474226    6096 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:09:15.487116    6096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:09:15.515743    6096 command_runner.go:130] > 2072
	I0428 18:09:15.515743    6096 api_server.go:72] duration metric: took 13.3648528s to wait for apiserver process to appear ...
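
The process wait above is a plain pgrep run over SSH: -x matches the whole command line, -n picks the newest match, -f matches against the full argument list. Run locally instead of over SSH, the equivalent check is a one-liner with os/exec; a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep exits non-zero when nothing matches; stdout is the PID on success.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no apiserver process yet")
            return
        }
        fmt.Printf("apiserver pid: %s", out) // the log above saw 2072
    }
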
	I0428 18:09:15.516003    6096 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:09:15.516129    6096 api_server.go:253] Checking apiserver healthz at https://172.27.231.169:8443/healthz ...
	I0428 18:09:15.523711    6096 api_server.go:279] https://172.27.231.169:8443/healthz returned 200:
	ok
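
The healthz wait boils down to an HTTPS GET against /healthz, treating a 200 response with body "ok" as healthy. A standalone probe against the address from the log; certificate verification is skipped only to keep the sketch self-contained (a real client would trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify is for the sketch only; use the cluster CA in practice.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://172.27.231.169:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%s\n", resp.StatusCode, body) // expect 200 "ok"
    }
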
	I0428 18:09:15.524169    6096 round_trippers.go:463] GET https://172.27.231.169:8443/version
	I0428 18:09:15.524216    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.524216    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.524216    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.526121    6096 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:09:15.526121    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.526248    6096 round_trippers.go:580]     Audit-Id: 81f1d169-a3a4-4748-b1b6-5244ea33b976
	I0428 18:09:15.526248    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.526248    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.526248    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.526248    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.526248    6096 round_trippers.go:580]     Content-Length: 263
	I0428 18:09:15.526248    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.526248    6096 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:09:15.526248    6096 api_server.go:141] control plane version: v1.30.0
	I0428 18:09:15.526248    6096 api_server.go:131] duration metric: took 10.2447ms to wait for apiserver health ...
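
The /version payload above is the standard version-info object, and the control-plane version reported afterwards is its gitVersion field. A short sketch decoding the same JSON with the standard library:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo mirrors the fields of the /version response shown above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(payload, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.0
    }
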
	I0428 18:09:15.526248    6096 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:09:15.672227    6096 request.go:629] Waited for 145.6589ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods
	I0428 18:09:15.672227    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods
	I0428 18:09:15.672227    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.672227    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.672227    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.678972    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:15.679350    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.679350    6096 round_trippers.go:580]     Audit-Id: 6a0f1b59-4a31-4f62-b2e6-7f14096f5510
	I0428 18:09:15.679350    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.679350    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.679350    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.679350    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.679350    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.681383    6096 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"442","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0428 18:09:15.683937    6096 system_pods.go:59] 8 kube-system pods found
	I0428 18:09:15.684008    6096 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "etcd-multinode-788600" [9d0f8c4f-569f-4a80-8960-2210a5a24612] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "kube-apiserver-multinode-788600" [e5571b43-6397-459f-b12d-b3d7f5b95eb0] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:09:15.684077    6096 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:09:15.684077    6096 system_pods.go:74] duration metric: took 157.8292ms to wait for pod list to return data ...
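
The system_pods wait is a single list of the kube-system namespace followed by a per-pod status check. An equivalent client-go sketch (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // expect Running for each, as above
        }
    }
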
	I0428 18:09:15.684077    6096 default_sa.go:34] waiting for default service account to be created ...
	I0428 18:09:15.859688    6096 request.go:629] Waited for 175.2293ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:09:15.859919    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:09:15.859919    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:15.859919    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:15.860030    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:15.864314    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:09:15.864314    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:15.864314    6096 round_trippers.go:580]     Audit-Id: f6dce771-cadc-4e21-ba8d-71077896e6c4
	I0428 18:09:15.864314    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:15.864789    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:15.864789    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:15.864789    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:15.864789    6096 round_trippers.go:580]     Content-Length: 261
	I0428 18:09:15.864789    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:15 GMT
	I0428 18:09:15.864789    6096 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cd75ac33-a0a3-4b71-9266-aa10ab97a649","resourceVersion":"328","creationTimestamp":"2024-04-29T01:09:02Z"}}]}
	I0428 18:09:15.865208    6096 default_sa.go:45] found service account: "default"
	I0428 18:09:15.865319    6096 default_sa.go:55] duration metric: took 181.1308ms for default service account to be created ...
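
The default_sa wait just lists service accounts in the default namespace until kube-controller-manager has created "default". The same lookup as a client-go sketch (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        sas, err := client.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sa := range sas.Items {
            fmt.Println("found service account:", sa.Name) // "default" once created
        }
    }
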
	I0428 18:09:15.865460    6096 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 18:09:16.062736    6096 request.go:629] Waited for 196.9617ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods
	I0428 18:09:16.062858    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods
	I0428 18:09:16.062858    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:16.062858    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:16.062858    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:16.069291    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:16.069291    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:16.069829    6096 round_trippers.go:580]     Audit-Id: d6ba3198-d342-4cd5-8ede-6a9b8367a97d
	I0428 18:09:16.069829    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:16.069829    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:16.069829    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:16.069829    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:16.069829    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:16 GMT
	I0428 18:09:16.071169    6096 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"442","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0428 18:09:16.073757    6096 system_pods.go:86] 8 kube-system pods found
	I0428 18:09:16.073757    6096 system_pods.go:89] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "etcd-multinode-788600" [9d0f8c4f-569f-4a80-8960-2210a5a24612] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "kube-apiserver-multinode-788600" [e5571b43-6397-459f-b12d-b3d7f5b95eb0] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:09:16.073757    6096 system_pods.go:89] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:09:16.073757    6096 system_pods.go:126] duration metric: took 208.296ms to wait for k8s-apps to be running ...
	I0428 18:09:16.074285    6096 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 18:09:16.088530    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:09:16.111639    6096 system_svc.go:56] duration metric: took 37.8818ms WaitForService to wait for kubelet
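
The kubelet wait above runs systemctl is-active over SSH; an exit status of 0 means the unit is active, and --quiet suppresses the textual state. Run locally, the same check is a one-liner with os/exec; a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl exits 0 when the unit is active; --quiet suppresses output.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
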
	I0428 18:09:16.112303    6096 kubeadm.go:576] duration metric: took 13.9614118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:09:16.112351    6096 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:09:16.266139    6096 request.go:629] Waited for 153.7874ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/nodes
	I0428 18:09:16.266420    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes
	I0428 18:09:16.266420    6096 round_trippers.go:469] Request Headers:
	I0428 18:09:16.266420    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:09:16.266420    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:09:16.272690    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:09:16.272802    6096 round_trippers.go:577] Response Headers:
	I0428 18:09:16.272802    6096 round_trippers.go:580]     Audit-Id: fd6f46e4-3cf6-44ba-bdc8-759d43602dbf
	I0428 18:09:16.272852    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:09:16.272852    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:09:16.272852    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:09:16.272880    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:09:16.272880    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:09:16 GMT
	I0428 18:09:16.272880    6096 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"423","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0428 18:09:16.273892    6096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:09:16.273892    6096 node_conditions.go:123] node cpu capacity is 2
	I0428 18:09:16.273892    6096 node_conditions.go:105] duration metric: took 161.5401ms to run NodePressure ...
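
The NodePressure check reads each node's capacity (the 17734596Ki of ephemeral storage and 2 CPUs reported above) and would flag MemoryPressure or DiskPressure conditions. A client-go sketch reading the same fields (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status) // expect False on a healthy node
                }
            }
        }
    }
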
	I0428 18:09:16.273892    6096 start.go:240] waiting for startup goroutines ...
	I0428 18:09:16.273892    6096 start.go:245] waiting for cluster config update ...
	I0428 18:09:16.273892    6096 start.go:254] writing updated cluster config ...
	I0428 18:09:16.282362    6096 out.go:177] 
	I0428 18:09:16.289278    6096 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:09:16.290290    6096 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:09:16.296321    6096 out.go:177] * Starting "multinode-788600-m02" worker node in "multinode-788600" cluster
	I0428 18:09:16.298530    6096 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:09:16.298530    6096 cache.go:56] Caching tarball of preloaded images
	I0428 18:09:16.299070    6096 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:09:16.299238    6096 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:09:16.299408    6096 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:09:16.304888    6096 start.go:360] acquireMachinesLock for multinode-788600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:09:16.304971    6096 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-788600-m02"
	I0428 18:09:16.304971    6096 start.go:93] Provisioning new machine with config: &{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0428 18:09:16.304971    6096 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0428 18:09:16.309524    6096 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 18:09:16.309524    6096 start.go:159] libmachine.API.Create for "multinode-788600" (driver="hyperv")
	I0428 18:09:16.311781    6096 client.go:168] LocalClient.Create starting
	I0428 18:09:16.311940    6096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0428 18:09:16.311940    6096 main.go:141] libmachine: Decoding PEM data...
	I0428 18:09:16.311940    6096 main.go:141] libmachine: Parsing certificate...
	I0428 18:09:16.311940    6096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0428 18:09:16.312963    6096 main.go:141] libmachine: Decoding PEM data...
	I0428 18:09:16.312963    6096 main.go:141] libmachine: Parsing certificate...
	I0428 18:09:16.312963    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0428 18:09:18.181397    6096 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0428 18:09:18.181397    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:18.181397    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0428 18:09:19.919176    6096 main.go:141] libmachine: [stdout =====>] : False
	
	I0428 18:09:19.919176    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:19.919176    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 18:09:21.392574    6096 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 18:09:21.392574    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:21.392574    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 18:09:24.872900    6096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 18:09:24.873134    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:24.875070    6096 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 18:09:25.357374    6096 main.go:141] libmachine: Creating SSH key...
	I0428 18:09:25.552371    6096 main.go:141] libmachine: Creating VM...
	I0428 18:09:25.552371    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0428 18:09:28.404848    6096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0428 18:09:28.404848    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:28.405274    6096 main.go:141] libmachine: Using switch "Default Switch"
	I0428 18:09:28.405342    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0428 18:09:30.151716    6096 main.go:141] libmachine: [stdout =====>] : True
	
	I0428 18:09:30.151716    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:30.151716    6096 main.go:141] libmachine: Creating VHD
	I0428 18:09:30.151883    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0428 18:09:33.751590    6096 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4FE876A5-F978-48AE-803E-F10913235C78
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0428 18:09:33.751672    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:33.751672    6096 main.go:141] libmachine: Writing magic tar header
	I0428 18:09:33.751811    6096 main.go:141] libmachine: Writing SSH key tar header
	I0428 18:09:33.761706    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0428 18:09:36.808060    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:36.808952    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:36.809035    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\disk.vhd' -SizeBytes 20000MB
	I0428 18:09:39.234150    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:39.235168    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:39.235330    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-788600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0428 18:09:42.792400    6096 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-788600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0428 18:09:42.792476    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:42.792476    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-788600-m02 -DynamicMemoryEnabled $false
	I0428 18:09:44.943537    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:44.943734    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:44.943734    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-788600-m02 -Count 2
	I0428 18:09:47.025638    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:47.025756    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:47.025868    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-788600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\boot2docker.iso'
	I0428 18:09:49.514756    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:49.514756    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:49.515383    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-788600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\disk.vhd'
	I0428 18:09:52.059072    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:52.059183    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:52.059183    6096 main.go:141] libmachine: Starting VM...
	I0428 18:09:52.059286    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600-m02
	I0428 18:09:55.013171    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:55.013171    6096 main.go:141] libmachine: [stderr =====>] : 
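
Every libmachine step above is a shell-out to powershell.exe with -NoProfile -NonInteractive and a single Hyper-V cmdlet. A stripped-down sketch of that pattern, replaying the VM-creation commands from the log (minus the VHD preparation); error handling is minimal and the VM/switch names are taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // posh runs one Hyper-V cmdlet the way the log shows: a fresh
    // non-interactive PowerShell per command, capturing stdout.
    func posh(command string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        steps := []string{
            `Hyper-V\New-VM multinode-788600-m02 -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
            `Hyper-V\Set-VMMemory -VMName multinode-788600-m02 -DynamicMemoryEnabled $false`,
            `Hyper-V\Set-VMProcessor multinode-788600-m02 -Count 2`,
            `Hyper-V\Start-VM multinode-788600-m02`,
        }
        for _, s := range steps {
            if out, err := posh(s); err != nil {
                panic(fmt.Sprintf("%s: %v", s, err))
            } else if out != "" {
                fmt.Println(out)
            }
        }
    }
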
	I0428 18:09:55.013710    6096 main.go:141] libmachine: Waiting for host to start...
	I0428 18:09:55.013807    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:09:57.172814    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:09:57.173802    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:09:57.173802    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:09:59.633011    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:09:59.633076    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:00.637172    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:02.718342    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:02.718342    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:02.718342    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:05.207796    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:10:05.207796    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:06.222179    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:08.312692    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:08.312760    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:08.312802    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:10.723232    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:10:10.723232    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:11.737539    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:13.863780    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:13.864404    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:13.864523    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:16.359469    6096 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:10:16.359803    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:17.372367    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:19.510642    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:19.510694    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:19.510795    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:22.064313    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:22.064449    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:22.064539    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:24.076750    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:24.077402    6096 main.go:141] libmachine: [stderr =====>] : 
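
The "Waiting for host to start..." phase polls the VM state and its first adapter's first IP address; the empty stdout lines above are attempts made before DHCP had assigned 172.27.230.221. A sketch of that retry loop, using the same posh helper shape as the previous block:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func posh(command string) string {
        out, _ := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        const vm = "multinode-788600-m02"
        for {
            state := posh(`( Hyper-V\Get-VM ` + vm + ` ).state`)
            ip := posh(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
            if state == "Running" && ip != "" {
                fmt.Println("host up at", ip) // e.g. 172.27.230.221
                return
            }
            time.Sleep(time.Second) // the log shows roughly 1s between attempts
        }
    }
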
	I0428 18:10:24.077402    6096 machine.go:94] provisionDockerMachine start ...
	I0428 18:10:24.077459    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:26.171972    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:26.172237    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:26.172237    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:28.628918    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:28.629727    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:28.636096    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:10:28.646799    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:10:28.646799    6096 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:10:28.780646    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 18:10:28.780646    6096 buildroot.go:166] provisioning hostname "multinode-788600-m02"
	I0428 18:10:28.780646    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:30.855393    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:30.855393    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:30.855547    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:33.419757    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:33.419877    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:33.425912    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:10:33.426499    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:10:33.426703    6096 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600-m02 && echo "multinode-788600-m02" | sudo tee /etc/hostname
	I0428 18:10:33.578336    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600-m02
	
	I0428 18:10:33.578526    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:35.623282    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:35.623282    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:35.623867    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:38.108846    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:38.108909    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:38.114671    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:10:38.114671    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:10:38.115206    6096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:10:38.251106    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
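
provisionDockerMachine drives the new host over SSH as user docker with the generated key, running the hostname and /etc/hosts commands shown above. A minimal sketch of that step with golang.org/x/crypto/ssh; the key path and address come from the log, and host-key checking is disabled only to keep the sketch short:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify in practice
        }
        client, err := ssh.Dial("tcp", "172.27.230.221:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput(`sudo hostname multinode-788600-m02 && echo "multinode-788600-m02" | sudo tee /etc/hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }
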
	I0428 18:10:38.251201    6096 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:10:38.251201    6096 buildroot.go:174] setting up certificates
	I0428 18:10:38.251201    6096 provision.go:84] configureAuth start
	I0428 18:10:38.251201    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:40.288050    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:40.288050    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:40.288211    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:42.798689    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:42.798689    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:42.798974    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:44.878689    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:44.878689    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:44.878689    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:47.369169    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:47.369169    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:47.369875    6096 provision.go:143] copyHostCerts
	I0428 18:10:47.370059    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:10:47.370383    6096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:10:47.370383    6096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:10:47.370581    6096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:10:47.371348    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:10:47.372197    6096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:10:47.372285    6096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:10:47.372844    6096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:10:47.374468    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:10:47.374580    6096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:10:47.374580    6096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:10:47.374580    6096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:10:47.376569    6096 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600-m02 san=[127.0.0.1 172.27.230.221 localhost minikube multinode-788600-m02]
	I0428 18:10:47.788866    6096 provision.go:177] copyRemoteCerts
	I0428 18:10:47.808729    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:10:47.808729    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:49.885768    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:49.885842    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:49.885908    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:52.352681    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:52.352681    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:52.353464    6096 sshutil.go:53] new ssh client: &{IP:172.27.230.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:10:52.461900    6096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6531624s)
	I0428 18:10:52.461900    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:10:52.462915    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:10:52.509326    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:10:52.509714    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0428 18:10:52.555657    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:10:52.556448    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 18:10:52.608978    6096 provision.go:87] duration metric: took 14.3577495s to configureAuth
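The configureAuth step that just completed regenerates a per-machine server certificate whose SANs cover every name and address the Docker daemon may be reached by (127.0.0.1, 172.27.230.221, localhost, minikube, multinode-788600-m02). minikube does this in Go (provision.go:117); the following is only a rough openssl-CLI sketch of the same operation, with file names and SANs taken from the log lines above:

    # Sketch only: an openssl equivalent of the server cert generated at provision.go:117.
    # ca.pem/ca-key.pem and the SAN list come from the log above; minikube does not shell out to openssl.
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.multinode-788600-m02" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.27.230.221,DNS:localhost,DNS:minikube,DNS:multinode-788600-m02')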
	I0428 18:10:52.608978    6096 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:10:52.610453    6096 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:10:52.610550    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:54.636287    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:54.636416    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:54.636536    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:10:57.115609    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:10:57.115609    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:57.121973    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:10:57.121973    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:10:57.121973    6096 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:10:57.251739    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:10:57.251739    6096 buildroot.go:70] root file system type: tmpfs
	I0428 18:10:57.251994    6096 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:10:57.252068    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:10:59.286981    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:10:59.287960    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:10:59.288057    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:01.768181    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:01.768181    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:01.773762    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:11:01.774640    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:11:01.774640    6096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.231.169"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:11:01.940232    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.231.169
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:11:01.940364    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:03.955722    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:03.955722    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:03.955953    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:06.447933    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:06.447999    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:06.454600    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:11:06.454600    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:11:06.454600    6096 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:11:08.645140    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:11:08.645275    6096 machine.go:97] duration metric: took 44.5677881s to provisionDockerMachine
	I0428 18:11:08.645275    6096 client.go:171] duration metric: took 1m52.3332851s to LocalClient.Create
	I0428 18:11:08.645275    6096 start.go:167] duration metric: took 1m52.3355422s to libmachine.API.Create "multinode-788600"
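The docker.service rollout above uses a compare-then-swap idiom: the rendered unit is written to docker.service.new, and only when diff reports a difference (or, as here, the installed unit does not exist yet, so diff fails) does the || branch install the new file and restart the daemon. Stripped of log prefixes, the pattern run over SSH is:

    # Idempotent unit update, exactly as executed in the log above: replace and
    # restart the daemon only when the rendered unit differs from the installed one.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }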
	I0428 18:11:08.645390    6096 start.go:293] postStartSetup for "multinode-788600-m02" (driver="hyperv")
	I0428 18:11:08.645390    6096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:11:08.658143    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:11:08.658143    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:10.702793    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:10.703551    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:10.703605    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:13.255424    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:13.255486    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:13.255668    6096 sshutil.go:53] new ssh client: &{IP:172.27.230.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:11:13.367072    6096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7089195s)
	I0428 18:11:13.379606    6096 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:11:13.386673    6096 command_runner.go:130] > NAME=Buildroot
	I0428 18:11:13.386673    6096 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:11:13.386673    6096 command_runner.go:130] > ID=buildroot
	I0428 18:11:13.386673    6096 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:11:13.386673    6096 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:11:13.386797    6096 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:11:13.386797    6096 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:11:13.387339    6096 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:11:13.388046    6096 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:11:13.388046    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:11:13.401074    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:11:13.424239    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:11:13.476752    6096 start.go:296] duration metric: took 4.8313521s for postStartSetup
	I0428 18:11:13.480071    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:15.534455    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:15.534541    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:15.534607    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:18.074719    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:18.074800    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:18.074800    6096 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:11:18.077799    6096 start.go:128] duration metric: took 2m1.7725999s to createHost
	I0428 18:11:18.077799    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:20.243492    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:20.243492    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:20.243492    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:22.730709    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:22.730709    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:22.738480    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:11:22.738968    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:11:22.739070    6096 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 18:11:22.864086    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714353082.868831107
	
	I0428 18:11:22.864208    6096 fix.go:216] guest clock: 1714353082.868831107
	I0428 18:11:22.864208    6096 fix.go:229] Guest: 2024-04-28 18:11:22.868831107 -0700 PDT Remote: 2024-04-28 18:11:18.0777996 -0700 PDT m=+330.521224501 (delta=4.791031507s)
	I0428 18:11:22.864208    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:24.953773    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:24.953773    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:24.953773    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:27.442174    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:27.442174    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:27.447784    6096 main.go:141] libmachine: Using SSH client type: native
	I0428 18:11:27.448286    6096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.230.221 22 <nil> <nil>}
	I0428 18:11:27.448358    6096 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714353082
	I0428 18:11:27.594079    6096 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:11:22 UTC 2024
	
	I0428 18:11:27.594079    6096 fix.go:236] clock set: Mon Apr 29 01:11:22 UTC 2024
	 (err=<nil>)
	I0428 18:11:27.594079    6096 start.go:83] releasing machines lock for "multinode-788600-m02", held for 2m11.2888605s
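The clock fix just above is a two-step exchange: the host reads the guest clock with date +%s.%N (the format verbs appear mangled in the log as %!s(MISSING).%!N(MISSING) because the command template passes through Go's formatter), computes the host/guest delta (4.79s here), and resets the guest clock when they drift. The guest-side commands, with the epoch value taken from the log:

    # Guest-clock correction as run in the log above.
    date +%s.%N                 # read guest time, to be compared against the host clock
    sudo date -s @1714353082    # pin guest time to the host's epoch seconds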
	I0428 18:11:27.594335    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:29.634522    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:29.634522    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:29.634799    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:32.068576    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:32.068815    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:32.071704    6096 out.go:177] * Found network options:
	I0428 18:11:32.074639    6096 out.go:177]   - NO_PROXY=172.27.231.169
	W0428 18:11:32.077222    6096 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:11:32.078814    6096 out.go:177]   - NO_PROXY=172.27.231.169
	W0428 18:11:32.082430    6096 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 18:11:32.083758    6096 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:11:32.086219    6096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:11:32.086859    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:32.098565    6096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:11:32.099598    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:11:34.198444    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:34.198444    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:34.198444    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:34.204273    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:34.204444    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:34.204444    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:36.785631    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:36.785721    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:36.785994    6096 sshutil.go:53] new ssh client: &{IP:172.27.230.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:11:36.808512    6096 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:11:36.808512    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:36.808728    6096 sshutil.go:53] new ssh client: &{IP:172.27.230.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:11:36.872547    6096 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0428 18:11:36.873457    6096 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7748822s)
	W0428 18:11:36.873547    6096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:11:36.885433    6096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:11:36.986566    6096 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:11:36.987452    6096 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:11:36.987452    6096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:11:36.987452    6096 start.go:494] detecting cgroup driver to use...
	I0428 18:11:36.987452    6096 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9004891s)
	I0428 18:11:36.987660    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:11:37.025298    6096 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:11:37.037935    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:11:37.076520    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:11:37.096367    6096 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:11:37.108154    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:11:37.143335    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:11:37.176167    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:11:37.210868    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:11:37.243756    6096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:11:37.277718    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:11:37.310440    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:11:37.347072    6096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:11:37.385389    6096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:11:37.406757    6096 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:11:37.422068    6096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:11:37.451093    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:11:37.639596    6096 ssh_runner.go:195] Run: sudo systemctl restart containerd
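The run of sed -i commands above edits /etc/containerd/config.toml in place: pinning the pause image, selecting the runc v2 runtime, pointing conf_dir at /etc/cni/net.d, and, per the containerd.go:146 line, forcing the cgroupfs cgroup driver. One representative edit, runnable on its own inside the guest:

    # From the log: switch containerd to the cgroupfs driver, then verify and restart.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # confirm the rewrite landed
    sudo systemctl daemon-reload && sudo systemctl restart containerd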
	I0428 18:11:37.671341    6096 start.go:494] detecting cgroup driver to use...
	I0428 18:11:37.686867    6096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:11:37.711047    6096 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:11:37.711047    6096 command_runner.go:130] > [Unit]
	I0428 18:11:37.711047    6096 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:11:37.711047    6096 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:11:37.711047    6096 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:11:37.711047    6096 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:11:37.711047    6096 command_runner.go:130] > StartLimitBurst=3
	I0428 18:11:37.711047    6096 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:11:37.711047    6096 command_runner.go:130] > [Service]
	I0428 18:11:37.711047    6096 command_runner.go:130] > Type=notify
	I0428 18:11:37.711047    6096 command_runner.go:130] > Restart=on-failure
	I0428 18:11:37.711047    6096 command_runner.go:130] > Environment=NO_PROXY=172.27.231.169
	I0428 18:11:37.711047    6096 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:11:37.711047    6096 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:11:37.711047    6096 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:11:37.711047    6096 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:11:37.711047    6096 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:11:37.711047    6096 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:11:37.711047    6096 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:11:37.711047    6096 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:11:37.711047    6096 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:11:37.711047    6096 command_runner.go:130] > ExecStart=
	I0428 18:11:37.711047    6096 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:11:37.711047    6096 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:11:37.711047    6096 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:11:37.711047    6096 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:11:37.711047    6096 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:11:37.711047    6096 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:11:37.711047    6096 command_runner.go:130] > LimitCORE=infinity
	I0428 18:11:37.711047    6096 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:11:37.711047    6096 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:11:37.711047    6096 command_runner.go:130] > TasksMax=infinity
	I0428 18:11:37.711047    6096 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:11:37.711047    6096 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:11:37.711047    6096 command_runner.go:130] > Delegate=yes
	I0428 18:11:37.711047    6096 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:11:37.711047    6096 command_runner.go:130] > KillMode=process
	I0428 18:11:37.711047    6096 command_runner.go:130] > [Install]
	I0428 18:11:37.711047    6096 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:11:37.725196    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:11:37.759997    6096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:11:37.809652    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:11:37.849695    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:11:37.887226    6096 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:11:37.952163    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:11:37.976818    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:11:38.012144    6096 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
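Note that /etc/crictl.yaml is written twice in this section: first pointing at containerd's socket, then, once containerd and crio are stopped, rewritten for cri-dockerd as shown just above. A quick way to confirm which endpoint crictl will now use (a verification step, not in the log):

    # Verify the active crictl endpoint after the rewrite above.
    cat /etc/crictl.yaml    # expect: runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl version     # should report RuntimeName: docker, as the log does later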
	I0428 18:11:38.027146    6096 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:11:38.034695    6096 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:11:38.048692    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:11:38.068659    6096 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:11:38.116584    6096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:11:38.316014    6096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:11:38.504225    6096 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:11:38.504340    6096 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 18:11:38.549070    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:11:38.748345    6096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:11:41.290967    6096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5426176s)
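The 130-byte /etc/docker/daemon.json copied in at docker.go:574 is not echoed in the log, so its exact contents are unknown here. Based on Docker's documented daemon.json options, a plausible minimal shape that selects the cgroupfs driver would be the following; treat it as an assumption, not the file's verbatim contents:

    # Hypothetical reconstruction of the daemon.json write -- the log only shows its size.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'    # expect: cgroupfs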
	I0428 18:11:41.304079    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 18:11:41.343369    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:11:41.383486    6096 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 18:11:41.578404    6096 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 18:11:41.775157    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:11:41.973423    6096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 18:11:42.014508    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:11:42.050013    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:11:42.265608    6096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 18:11:42.380716    6096 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 18:11:42.394307    6096 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 18:11:42.403411    6096 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0428 18:11:42.403411    6096 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0428 18:11:42.403411    6096 command_runner.go:130] > Device: 0,22	Inode: 875         Links: 1
	I0428 18:11:42.403411    6096 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0428 18:11:42.403411    6096 command_runner.go:130] > Access: 2024-04-29 01:11:42.293658446 +0000
	I0428 18:11:42.403411    6096 command_runner.go:130] > Modify: 2024-04-29 01:11:42.293658446 +0000
	I0428 18:11:42.403411    6096 command_runner.go:130] > Change: 2024-04-29 01:11:42.297658446 +0000
	I0428 18:11:42.403411    6096 command_runner.go:130] >  Birth: -
	I0428 18:11:42.403411    6096 start.go:562] Will wait 60s for crictl version
	I0428 18:11:42.416694    6096 ssh_runner.go:195] Run: which crictl
	I0428 18:11:42.422314    6096 command_runner.go:130] > /usr/bin/crictl
	I0428 18:11:42.435828    6096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 18:11:42.491900    6096 command_runner.go:130] > Version:  0.1.0
	I0428 18:11:42.491900    6096 command_runner.go:130] > RuntimeName:  docker
	I0428 18:11:42.491900    6096 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0428 18:11:42.491900    6096 command_runner.go:130] > RuntimeApiVersion:  v1
	I0428 18:11:42.491900    6096 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 18:11:42.501951    6096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:11:42.537050    6096 command_runner.go:130] > 26.0.2
	I0428 18:11:42.548452    6096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:11:42.583522    6096 command_runner.go:130] > 26.0.2
	I0428 18:11:42.586714    6096 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 18:11:42.590218    6096 out.go:177]   - env NO_PROXY=172.27.231.169
	I0428 18:11:42.592637    6096 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 18:11:42.597441    6096 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 18:11:42.597441    6096 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 18:11:42.597441    6096 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 18:11:42.597441    6096 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 18:11:42.600482    6096 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 18:11:42.600482    6096 ip.go:210] interface addr: 172.27.224.1/20
	I0428 18:11:42.613382    6096 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 18:11:42.620562    6096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
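The /etc/hosts update above is the same idempotent pattern used later for control-plane.minikube.internal: filter out any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts, so repeated runs never duplicate the entry. Unescaped, with the tab written as $'\t':

    # The hosts-file idiom from the log: strip any old entry, append the new one, swap in.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'172.27.224.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts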
	I0428 18:11:42.641571    6096 mustload.go:65] Loading cluster: multinode-788600
	I0428 18:11:42.642432    6096 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:11:42.643552    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:11:44.659208    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:44.659208    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:44.659208    6096 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:11:44.660943    6096 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600 for IP: 172.27.230.221
	I0428 18:11:44.660943    6096 certs.go:194] generating shared ca certs ...
	I0428 18:11:44.660943    6096 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:11:44.661924    6096 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 18:11:44.662495    6096 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 18:11:44.662821    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 18:11:44.663080    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 18:11:44.663391    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 18:11:44.663608    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 18:11:44.664292    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 18:11:44.664755    6096 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 18:11:44.664859    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 18:11:44.665172    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 18:11:44.665584    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 18:11:44.665892    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 18:11:44.666554    6096 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 18:11:44.666554    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 18:11:44.666554    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:11:44.667103    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 18:11:44.667295    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 18:11:44.718102    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 18:11:44.767071    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 18:11:44.811439    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 18:11:44.856890    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 18:11:44.902265    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 18:11:44.948348    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 18:11:45.006474    6096 ssh_runner.go:195] Run: openssl version
	I0428 18:11:45.015196    6096 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0428 18:11:45.028118    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 18:11:45.058778    6096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 18:11:45.067445    6096 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:11:45.067445    6096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:11:45.079739    6096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 18:11:45.088443    6096 command_runner.go:130] > 3ec20f2e
	I0428 18:11:45.100299    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 18:11:45.131722    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 18:11:45.162780    6096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:11:45.169730    6096 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:11:45.169730    6096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:11:45.181892    6096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:11:45.191311    6096 command_runner.go:130] > b5213941
	I0428 18:11:45.204754    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 18:11:45.243224    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 18:11:45.277474    6096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 18:11:45.285337    6096 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:11:45.285337    6096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:11:45.297547    6096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 18:11:45.306323    6096 command_runner.go:130] > 51391683
	I0428 18:11:45.318161    6096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
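The openssl x509 -hash calls above drive a standard trust-store layout: OpenSSL resolves CAs in /etc/ssl/certs by files named <subject-hash>.0, so each installed PEM gets a matching symlink (b5213941.0 for minikubeCA.pem, and so on). Generalized from the log's per-certificate commands:

    # Install a CA into OpenSSL's hashed trust directory, as the log does for each cert.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"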
	I0428 18:11:45.355096    6096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:11:45.360579    6096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 18:11:45.361660    6096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 18:11:45.361660    6096 kubeadm.go:928] updating node {m02 172.27.230.221 8443 v1.30.0 docker false true} ...
	I0428 18:11:45.361660    6096 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-788600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.230.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 18:11:45.374538    6096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 18:11:45.391719    6096 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0428 18:11:45.391719    6096 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0428 18:11:45.404643    6096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0428 18:11:45.424496    6096 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0428 18:11:45.424496    6096 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0428 18:11:45.424496    6096 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
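Because no cached binaries exist on the host, the three Kubernetes binaries are streamed straight from dl.k8s.io; the ?checksum=file:<url>.sha256 query in the URLs above evidently tells minikube's downloader to verify each file against the published digest. A manual equivalent with plain curl and sha256sum (the .sha256 files carry only the bare digest, so the filename is appended for -c):

    # Manual equivalent of the checksum-verified kubelet download above.
    curl -fsSLO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    curl -fsSL https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 \
      | awk '{print $1"  kubelet"}' | sha256sum -c -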
	I0428 18:11:45.425024    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 18:11:45.425146    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 18:11:45.440933    6096 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 18:11:45.440933    6096 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 18:11:45.442115    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:11:45.448121    6096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0428 18:11:45.448121    6096 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0428 18:11:45.448121    6096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0428 18:11:45.448121    6096 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0428 18:11:45.448121    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0428 18:11:45.448121    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0428 18:11:45.506729    6096 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 18:11:45.520388    6096 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 18:11:45.600457    6096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0428 18:11:45.610817    6096 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0428 18:11:45.611042    6096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0428 18:11:46.812185    6096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0428 18:11:46.829861    6096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0428 18:11:46.861432    6096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 18:11:46.906084    6096 ssh_runner.go:195] Run: grep 172.27.231.169	control-plane.minikube.internal$ /etc/hosts
	I0428 18:11:46.913238    6096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.231.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:11:46.947149    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:11:47.177439    6096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:11:47.210324    6096 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:11:47.211074    6096 start.go:316] joinCluster: &{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:11:47.211304    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0428 18:11:47.211405    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:11:49.318755    6096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:11:49.318837    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:49.318917    6096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:11:51.785870    6096 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:11:51.786576    6096 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:11:51.786576    6096 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:11:51.997732    6096 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vq45ld.v56hiyvbnrjzkh0h --discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c 
	I0428 18:11:51.997732    6096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7864191s)
	I0428 18:11:51.997732    6096 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0428 18:11:51.997732    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vq45ld.v56hiyvbnrjzkh0h --discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-788600-m02"
	I0428 18:11:52.212209    6096 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 18:11:53.526816    6096 command_runner.go:130] > [preflight] Running pre-flight checks
	I0428 18:11:53.526816    6096 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0428 18:11:53.526902    6096 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0428 18:11:53.526902    6096 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 18:11:53.526902    6096 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 18:11:53.526902    6096 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0428 18:11:53.526902    6096 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 18:11:53.526902    6096 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002692002s
	I0428 18:11:53.526902    6096 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0428 18:11:53.526902    6096 command_runner.go:130] > This node has joined the cluster:
	I0428 18:11:53.526902    6096 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0428 18:11:53.527012    6096 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0428 18:11:53.527012    6096 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0428 18:11:53.527012    6096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vq45ld.v56hiyvbnrjzkh0h --discovery-token-ca-cert-hash sha256:5f5d7b85b077f2e288e75e39a41f7a8d9853e7a56e67af1c968077db82f54b2c --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-788600-m02": (1.5292764s)
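
The join the log just completed is a two-step handshake: mint a fresh join command on the control plane (--ttl=0 makes the bootstrap token non-expiring), then replay it on the worker with the extra flags minikube appends. A minimal sketch of that flow, assuming a generic command runner in place of minikube's ssh_runner; the runner type and the stand-in in main are illustrative:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // runner abstracts "run this shell command on a given host", e.g. over SSH.
    type runner func(cmd string) (string, error)

    func joinWorker(controlPlane, worker runner, nodeName string) error {
    	// Step 1: have kubeadm on the control plane print a join command.
    	joinCmd, err := controlPlane("kubeadm token create --print-join-command --ttl=0")
    	if err != nil {
    		return fmt.Errorf("token create: %w", err)
    	}
    	// Step 2: run it on the worker with the flags seen in the log.
    	full := strings.TrimSpace(joinCmd) +
    		" --ignore-preflight-errors=all" +
    		" --cri-socket unix:///var/run/cri-dockerd.sock" +
    		" --node-name=" + nodeName
    	if _, err := worker("sudo " + full); err != nil {
    		return fmt.Errorf("kubeadm join: %w", err)
    	}
    	return nil
    }

    func main() {
    	echo := func(cmd string) (string, error) { // stand-in runner for illustration
    		fmt.Println("would run:", cmd)
    		return "kubeadm join control-plane.minikube.internal:8443 --token <redacted>", nil
    	}
    	if err := joinWorker(echo, echo, "multinode-788600-m02"); err != nil {
    		fmt.Println(err)
    	}
    }
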
	I0428 18:11:53.527105    6096 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0428 18:11:53.742741    6096 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0428 18:11:53.940324    6096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-788600-m02 minikube.k8s.io/updated_at=2024_04_28T18_11_53_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=multinode-788600 minikube.k8s.io/primary=false
	I0428 18:11:54.079559    6096 command_runner.go:130] > node/multinode-788600-m02 labeled
	I0428 18:11:54.079559    6096 start.go:318] duration metric: took 6.8684714s to joinCluster
	I0428 18:11:54.079559    6096 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0428 18:11:54.080348    6096 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:11:54.084910    6096 out.go:177] * Verifying Kubernetes components...
	I0428 18:11:54.099540    6096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:11:54.317156    6096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:11:54.347080    6096 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:11:54.347881    6096 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.231.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
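
The rest.Config above authenticates with a client certificate and key, verified against the cluster CA. A standard-library-only sketch of building an equivalent HTTP client (the file names here are shortened placeholders for the CertFile/KeyFile/CAFile paths recorded above):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"net/http"
    	"os"
    )

    func apiClient(certFile, keyFile, caFile string) (*http.Client, error) {
    	// Client cert/key pair presented to the API server.
    	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
    	if err != nil {
    		return nil, err
    	}
    	// Cluster CA used to verify the API server's certificate.
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return nil, err
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		return nil, fmt.Errorf("no certificates found in %s", caFile)
    	}
    	return &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{
    				Certificates: []tls.Certificate{cert},
    				RootCAs:      pool,
    			},
    		},
    	}, nil
    }

    func main() {
    	c, err := apiClient("client.crt", "client.key", "ca.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	_ = c // e.g. c.Get("https://172.27.231.169:8443/api/v1/nodes/...")
    }
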
	I0428 18:11:54.348876    6096 node_ready.go:35] waiting up to 6m0s for node "multinode-788600-m02" to be "Ready" ...
	I0428 18:11:54.349112    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:54.349187    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:54.349187    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:54.349187    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:54.362483    6096 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0428 18:11:54.362700    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:54.362700    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:54 GMT
	I0428 18:11:54.362700    6096 round_trippers.go:580]     Audit-Id: 8b7484e9-a24d-4888-a516-48d5fc706489
	I0428 18:11:54.362700    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:54.362700    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:54.362700    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:54.362700    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:54.362700    6096 round_trippers.go:580]     Content-Length: 3921
	I0428 18:11:54.362766    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"603","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0428 18:11:54.849766    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:54.849766    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:54.849766    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:54.849766    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:54.853037    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:11:54.853952    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:54.853952    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:54.853952    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:54.853952    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:54.853952    6096 round_trippers.go:580]     Content-Length: 3921
	I0428 18:11:54.853952    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:54 GMT
	I0428 18:11:54.853952    6096 round_trippers.go:580]     Audit-Id: 7af32acc-5461-4baf-bf35-f4af1372fcbc
	I0428 18:11:54.853952    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:54.854067    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"603","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0428 18:11:55.362638    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:55.362960    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:55.362960    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:55.362960    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:55.368213    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:11:55.368279    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:55.368279    6096 round_trippers.go:580]     Audit-Id: f1beaff1-0c1e-4e84-87c5-1fb5c3c91f07
	I0428 18:11:55.368279    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:55.368279    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:55.368279    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:55.368279    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:55.368279    6096 round_trippers.go:580]     Content-Length: 3921
	I0428 18:11:55.368279    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:55 GMT
	I0428 18:11:55.368513    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"603","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0428 18:11:55.863900    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:55.864119    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:55.864119    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:55.864119    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:55.870101    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:11:55.870101    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:55.870101    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:55.870101    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:55.870101    6096 round_trippers.go:580]     Content-Length: 3921
	I0428 18:11:55.870101    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:55 GMT
	I0428 18:11:55.870101    6096 round_trippers.go:580]     Audit-Id: ed97d0a1-df9c-46e1-9a92-3fc28bf936f2
	I0428 18:11:55.870101    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:55.870101    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:55.870692    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"603","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0428 18:11:56.363276    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:56.363276    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:56.363276    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:56.363276    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:56.370855    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:11:56.370900    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:56.370955    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:56.370955    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:56.370955    6096 round_trippers.go:580]     Content-Length: 3921
	I0428 18:11:56.370955    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:56 GMT
	I0428 18:11:56.370997    6096 round_trippers.go:580]     Audit-Id: bdb564c8-4c2b-410b-932c-4b0cacb82921
	I0428 18:11:56.370997    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:56.370997    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:56.371172    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"603","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0428 18:11:56.371295    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:11:56.861684    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:56.861904    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:56.861904    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:56.861904    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:56.867504    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:11:56.867619    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:56.867619    6096 round_trippers.go:580]     Audit-Id: fa52ae49-62a8-42e5-bb24-b17c4666c94c
	I0428 18:11:56.867619    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:56.867619    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:56.867619    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:56.867619    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:56.867619    6096 round_trippers.go:580]     Content-Length: 3921
	I0428 18:11:56.867619    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:56 GMT
	I0428 18:11:56.867952    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"603","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0428 18:11:57.364348    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:57.364491    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:57.364491    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:57.364491    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:57.371827    6096 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:11:57.371827    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:57.371987    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:57.371987    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:57.371987    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:57.371987    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:11:57.372045    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:57 GMT
	I0428 18:11:57.372045    6096 round_trippers.go:580]     Audit-Id: dd1e1d4e-7d25-4aed-b820-6151b1dee75b
	I0428 18:11:57.372045    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:57.372045    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:11:57.861552    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:57.861645    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:57.861645    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:57.861645    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:57.867915    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:11:57.867915    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:57.867915    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:57.867915    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:57.867915    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:11:57.867915    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:57 GMT
	I0428 18:11:57.867915    6096 round_trippers.go:580]     Audit-Id: 8ef4443e-0478-4390-a750-2b6a65c669bf
	I0428 18:11:57.867915    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:57.867915    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:57.867915    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:11:58.362885    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:58.363123    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:58.363123    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:58.363123    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:58.367596    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:11:58.367596    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:58.367596    6096 round_trippers.go:580]     Audit-Id: a139053f-bac0-4f9d-82fd-cb55b69eed82
	I0428 18:11:58.367596    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:58.367596    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:58.367596    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:58.367596    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:58.368432    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:11:58.368432    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:58 GMT
	I0428 18:11:58.368547    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:11:58.850657    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:58.850657    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:58.850657    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:58.850657    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:58.855101    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:11:58.855101    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:58.855775    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:58.855775    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:58.855775    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:58.855775    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:58.855775    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:11:58.855775    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:58 GMT
	I0428 18:11:58.855775    6096 round_trippers.go:580]     Audit-Id: 948bab24-b1be-449b-ad99-3b9e4967fe46
	I0428 18:11:58.855995    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:11:58.856470    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:11:59.349434    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:59.349519    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:59.349519    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:59.349519    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:59.362002    6096 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0428 18:11:59.362002    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:59.362568    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:11:59.362568    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:59 GMT
	I0428 18:11:59.362568    6096 round_trippers.go:580]     Audit-Id: 4c935af8-b5f4-4862-b49f-5d3a33b821b8
	I0428 18:11:59.362568    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:59.362568    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:59.362568    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:59.362568    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:59.362710    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:11:59.850496    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:11:59.850566    6096 round_trippers.go:469] Request Headers:
	I0428 18:11:59.850566    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:11:59.850566    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:11:59.857835    6096 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:11:59.857899    6096 round_trippers.go:577] Response Headers:
	I0428 18:11:59.857899    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:11:59.857899    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:11:59.857899    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:11:59.857899    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:11:59.857899    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:11:59 GMT
	I0428 18:11:59.857899    6096 round_trippers.go:580]     Audit-Id: 97804580-8052-478f-9290-eb6ff0623f9d
	I0428 18:11:59.857899    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:11:59.857899    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:00.354739    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:00.354992    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:00.354992    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:00.354992    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:00.359161    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:00.359161    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:00.359161    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:00.359161    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:00 GMT
	I0428 18:12:00.359445    6096 round_trippers.go:580]     Audit-Id: 1ac73bed-f776-4281-b276-38e59d142af4
	I0428 18:12:00.359445    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:00.359445    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:00.359445    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:00.359445    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:00.359544    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:00.863147    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:00.863221    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:00.863221    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:00.863221    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:00.867954    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:00.868010    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:00.868010    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:00.868010    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:00.868010    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:00.868010    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:00 GMT
	I0428 18:12:00.868102    6096 round_trippers.go:580]     Audit-Id: e35bec2f-e2bc-4717-81e7-4ca9f42a641a
	I0428 18:12:00.868102    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:00.868102    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:00.868282    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:00.868711    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:12:01.363348    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:01.363348    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:01.363348    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:01.363348    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:01.368031    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:01.368031    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:01.368031    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:01.368031    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:01.368031    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:01.368031    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:01.368031    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:01 GMT
	I0428 18:12:01.368031    6096 round_trippers.go:580]     Audit-Id: 91b53ab9-1340-44f2-bd03-357ef948e3c2
	I0428 18:12:01.368031    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:01.368676    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:01.857520    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:01.857520    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:01.857520    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:01.857520    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:01.861118    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:01.861855    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:01.861855    6096 round_trippers.go:580]     Audit-Id: 33cad809-e070-4c7e-bf79-0a3a6ee3c0fd
	I0428 18:12:01.861855    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:01.861855    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:01.861855    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:01.861855    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:01.861855    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:01.861855    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:01 GMT
	I0428 18:12:01.862243    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:02.351649    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:02.351649    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:02.351649    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:02.351649    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:02.356622    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:02.356764    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:02.356764    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:02.356764    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:02.356764    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:02.356764    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:02 GMT
	I0428 18:12:02.356764    6096 round_trippers.go:580]     Audit-Id: 00020707-d159-4e53-aa2c-ac5d6ec4a077
	I0428 18:12:02.356764    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:02.356764    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:02.357019    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:02.858558    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:02.858558    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:02.858558    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:02.858558    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:02.862751    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:02.862751    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:02.863528    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:02.863528    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:02.863528    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:02 GMT
	I0428 18:12:02.863528    6096 round_trippers.go:580]     Audit-Id: 8f37aac4-537b-42f0-ae54-6a5ea4f40afd
	I0428 18:12:02.863528    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:02.863528    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:02.863528    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:02.863663    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:03.363750    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:03.363827    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:03.363827    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:03.363827    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:03.887983    6096 round_trippers.go:574] Response Status: 200 OK in 524 milliseconds
	I0428 18:12:03.887983    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:03.887983    6096 round_trippers.go:580]     Audit-Id: 172cfd7b-ffab-4d85-82ea-56b3734f4516
	I0428 18:12:03.887983    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:03.887983    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:03.887983    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:03.887983    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:03.887983    6096 round_trippers.go:580]     Content-Length: 4030
	I0428 18:12:03.887983    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:03 GMT
	I0428 18:12:03.888521    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"609","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0428 18:12:03.888806    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:12:03.888806    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:03.888806    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:03.888806    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:03.888806    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:04.296470    6096 round_trippers.go:574] Response Status: 200 OK in 407 milliseconds
	I0428 18:12:04.296882    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:04.296882    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:04.296882    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:04.296882    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:04.296882    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:04 GMT
	I0428 18:12:04.296882    6096 round_trippers.go:580]     Audit-Id: 57890daa-293b-4abc-a235-a2a0c2892677
	I0428 18:12:04.296882    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:04.297201    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:04.363898    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:04.363898    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:04.363898    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:04.363898    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:04.367873    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:04.367873    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:04.367873    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:04.367873    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:04.368228    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:04.368228    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:04 GMT
	I0428 18:12:04.368228    6096 round_trippers.go:580]     Audit-Id: 4ed14371-d0f4-43ec-a6b5-fa9df850ed01
	I0428 18:12:04.368297    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:04.368297    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:04.855293    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:04.855293    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:04.855513    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:04.855513    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:04.859291    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:04.859291    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:04.859291    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:04 GMT
	I0428 18:12:04.859291    6096 round_trippers.go:580]     Audit-Id: 78d546ab-b4d2-47eb-9fc6-fbfa4e0195bf
	I0428 18:12:04.859291    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:04.859291    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:04.859291    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:04.859291    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:04.859291    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:05.363882    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:05.363882    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:05.363882    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:05.363882    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:05.366491    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:12:05.367501    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:05.367576    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:05.367576    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:05 GMT
	I0428 18:12:05.367576    6096 round_trippers.go:580]     Audit-Id: 5c612e83-d7d5-49cf-8af1-2370ec8e9abb
	I0428 18:12:05.367638    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:05.367638    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:05.367638    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:05.368365    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:05.858538    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:05.858693    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:05.858769    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:05.858769    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:05.862547    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:05.863571    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:05.863642    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:05.863642    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:05.863642    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:05.863642    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:05 GMT
	I0428 18:12:05.863642    6096 round_trippers.go:580]     Audit-Id: dacdc709-d2e6-4ffc-806d-4a4c328b1ce2
	I0428 18:12:05.863642    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:05.863783    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:06.352399    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:06.352879    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:06.352879    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:06.352879    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:06.355268    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:12:06.355268    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:06.355268    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:06.355268    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:06.355268    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:06 GMT
	I0428 18:12:06.355268    6096 round_trippers.go:580]     Audit-Id: 4727dc74-6fb4-4731-8a91-0a74a79375c8
	I0428 18:12:06.355268    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:06.355268    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:06.356058    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:06.356451    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
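The repeated GETs above are a readiness poll: node_ready.go re-reads the Node object roughly every 500ms until its Ready condition flips to True. Here is a hedged sketch of the same loop with client-go, assuming a reachable cluster via a local kubeconfig; nodeIsReady and the 6-minute timeout are our own choices, not minikube's exact code.

// Sketch: poll a Node until its NodeReady condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node carries a NodeReady condition set to True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const nodeName = "multinode-788600-m02" // node name taken from the log
	// Poll every 500ms for up to 6 minutes, mirroring the cadence above.
	err = wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and retry
			}
			return nodeIsReady(node), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", nodeName)
}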
	I0428 18:12:06.860732    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:06.860732    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:06.860732    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:06.860732    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:06.864396    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:06.864442    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:06.864442    6096 round_trippers.go:580]     Audit-Id: cb325068-67b6-46e7-bf7d-4593a5ee9a62
	I0428 18:12:06.864442    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:06.864442    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:06.864442    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:06.864442    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:06.864442    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:06 GMT
	I0428 18:12:06.864442    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:07.355934    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:07.355934    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:07.355934    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:07.355934    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:07.359525    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:07.359525    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:07.360370    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:07.360370    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:07 GMT
	I0428 18:12:07.360370    6096 round_trippers.go:580]     Audit-Id: 453b028b-957b-4fb0-a22c-fa3071f84a85
	I0428 18:12:07.360370    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:07.360370    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:07.360370    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:07.360647    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:07.864450    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:07.864516    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:07.864516    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:07.864516    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:07.868363    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:07.868363    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:07.868363    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:07.868363    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:07.868363    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:07.868363    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:07 GMT
	I0428 18:12:07.868363    6096 round_trippers.go:580]     Audit-Id: b7434958-6c6d-4a40-8ea0-3c703eb4789a
	I0428 18:12:07.868363    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:07.868363    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:08.352901    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:08.352901    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:08.352901    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:08.352901    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:08.355909    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:08.355909    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:08.355909    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:08.355909    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:08 GMT
	I0428 18:12:08.355909    6096 round_trippers.go:580]     Audit-Id: c59cfa04-bdd7-43c1-9b4c-c00780154e13
	I0428 18:12:08.355909    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:08.355909    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:08.355909    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:08.355909    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:08.356587    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:12:08.859402    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:08.859402    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:08.859402    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:08.859402    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:09.168429    6096 round_trippers.go:574] Response Status: 200 OK in 308 milliseconds
	I0428 18:12:09.168493    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:09.168493    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:09.168493    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:09.168493    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:09.168554    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:09 GMT
	I0428 18:12:09.168554    6096 round_trippers.go:580]     Audit-Id: 7b9e4f08-7a65-49f9-8f4e-4fb4c81ce97d
	I0428 18:12:09.168613    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:09.168945    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:09.358951    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:09.359181    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:09.359181    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:09.359181    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:09.362700    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:09.362700    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:09.363664    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:09.363664    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:09 GMT
	I0428 18:12:09.363664    6096 round_trippers.go:580]     Audit-Id: a14d176d-0863-406b-9998-7a716ba627ec
	I0428 18:12:09.363664    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:09.363664    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:09.363729    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:09.363867    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:09.861235    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:09.861235    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:09.861235    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:09.861235    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:09.864757    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:09.865663    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:09.865663    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:09 GMT
	I0428 18:12:09.865663    6096 round_trippers.go:580]     Audit-Id: 56ac8c5d-8c44-4d36-9b6b-432fe58c2127
	I0428 18:12:09.865663    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:09.865663    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:09.865738    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:09.865738    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:09.865891    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:10.361226    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:10.361226    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:10.361226    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:10.361226    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:10.363980    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:12:10.364981    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:10.365037    6096 round_trippers.go:580]     Audit-Id: 7adf63ab-4ece-4217-b73b-a14fcb47043a
	I0428 18:12:10.365037    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:10.365037    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:10.365037    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:10.365037    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:10.365037    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:10 GMT
	I0428 18:12:10.365998    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:10.366764    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:12:10.863682    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:10.863682    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:10.863682    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:10.863682    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:10.868103    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:10.868181    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:10.868181    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:10.868456    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:10.868456    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:10.868456    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:10.868456    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:10 GMT
	I0428 18:12:10.868456    6096 round_trippers.go:580]     Audit-Id: f0d6ca03-7153-4f8a-b136-7cfb6bb46517
	I0428 18:12:10.868456    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:11.350273    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:11.350341    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:11.350341    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:11.350341    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:11.353863    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:11.353863    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:11.353863    6096 round_trippers.go:580]     Audit-Id: b5239330-355c-407e-afaa-8ed85efe3be2
	I0428 18:12:11.353863    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:11.354167    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:11.354167    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:11.354167    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:11.354167    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:11 GMT
	I0428 18:12:11.354600    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:11.863759    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:11.863759    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:11.863759    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:11.863759    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:11.868072    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:11.868112    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:11.868112    6096 round_trippers.go:580]     Audit-Id: 7b560777-7520-468a-b786-d0e6efd53635
	I0428 18:12:11.868112    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:11.868112    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:11.868112    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:11.868112    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:11.868112    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:11 GMT
	I0428 18:12:11.868112    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:12.360858    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:12.361096    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.361096    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.361096    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.366427    6096 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:12:12.366427    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.366427    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.366427    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.366427    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.366427    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.366427    6096 round_trippers.go:580]     Audit-Id: 80f93b79-8173-4c59-ad74-a0fbdb13761f
	I0428 18:12:12.366427    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.366427    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"619","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0428 18:12:12.367651    6096 node_ready.go:53] node "multinode-788600-m02" has status "Ready":"False"
	I0428 18:12:12.862677    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:12.862677    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.862677    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.862677    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.866296    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:12.866832    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.866832    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.866832    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.866832    6096 round_trippers.go:580]     Audit-Id: ffd4ed9f-734a-4680-8326-77bf49e117be
	I0428 18:12:12.866923    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.866923    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.866923    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.867207    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"640","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0428 18:12:12.867207    6096 node_ready.go:49] node "multinode-788600-m02" has status "Ready":"True"
	I0428 18:12:12.867773    6096 node_ready.go:38] duration metric: took 18.5187979s for node "multinode-788600-m02" to be "Ready" ...
	I0428 18:12:12.867773    6096 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
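The "extra waiting" step above lists the kube-system pods once and then tracks only those matching the six label selectors named in the log line. A minimal sketch of that client-side filtering, under the same local-kubeconfig assumption; the selector strings are copied from the log, everything else is illustrative.

// Sketch: list kube-system pods and keep those matching any of the
// system-critical label selectors named in the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The six selectors named in the log line above, parsed once.
	selectorStrings := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	var selectors []labels.Selector
	for _, s := range selectorStrings {
		sel, err := labels.Parse(s)
		if err != nil {
			panic(err)
		}
		selectors = append(selectors, sel)
	}

	// One list call, then client-side matching against each selector.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, sel := range selectors {
			if sel.Matches(labels.Set(pod.Labels)) {
				fmt.Printf("system-critical pod: %s\n", pod.Name)
				break
			}
		}
	}
}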
	I0428 18:12:12.868028    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods
	I0428 18:12:12.868028    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.868092    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.868092    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.875216    6096 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:12:12.875216    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.875216    6096 round_trippers.go:580]     Audit-Id: 578e276f-4722-4b69-b843-09cec5259c0d
	I0428 18:12:12.875216    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.875216    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.875216    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.875216    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.875216    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.876151    6096 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"640"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"442","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0428 18:12:12.880123    6096 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.880123    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:12:12.880123    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.880123    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.880123    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.883202    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:12.883202    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.883202    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.883202    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.883202    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.883202    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.883202    6096 round_trippers.go:580]     Audit-Id: 145b18fa-89f0-403b-a631-5ebf8b97f91a
	I0428 18:12:12.883202    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.884298    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"442","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0428 18:12:12.884854    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:12.884854    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.884854    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.884854    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.886459    6096 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:12:12.886459    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.887479    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.887479    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.887479    6096 round_trippers.go:580]     Audit-Id: 975d6093-83dd-49ea-9771-5494f5a2841f
	I0428 18:12:12.887479    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.887479    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.887479    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.887479    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0428 18:12:12.888239    6096 pod_ready.go:92] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:12.888239    6096 pod_ready.go:81] duration metric: took 8.116ms for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
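Each pod_ready check above reduces to reading the pod's Ready condition, the field behind the "Ready":"True" messages. A sketch under the same assumptions; podIsReady is a hypothetical helper that mirrors, rather than reproduces, pod_ready.go.

// Sketch: fetch one pod and inspect its PodReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the Pod carries a PodReady condition set to True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Pod name copied from the log; any kube-system pod works the same way.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-7db6d8ff4d-rp2lx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}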
	I0428 18:12:12.888239    6096 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.888392    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:12:12.888392    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.888392    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.888392    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.890573    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:12:12.890573    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.891178    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.891178    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.891178    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.891178    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.891178    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.891178    6096 round_trippers.go:580]     Audit-Id: 41add843-72d4-4d8c-ab14-e71c4afb5d43
	I0428 18:12:12.891366    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"9d0f8c4f-569f-4a80-8960-2210a5a24612","resourceVersion":"402","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.231.169:2379","kubernetes.io/config.hash":"589ef16acbcd1b3600cffadabab7475a","kubernetes.io/config.mirror":"589ef16acbcd1b3600cffadabab7475a","kubernetes.io/config.seen":"2024-04-29T01:08:48.885063333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0428 18:12:12.892099    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:12.892099    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.892160    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.892160    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.894800    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:12:12.894800    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.894800    6096 round_trippers.go:580]     Audit-Id: fa35bf00-b7f8-4a1b-9651-71c9c45a4aa9
	I0428 18:12:12.894800    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.894800    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.894800    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.894800    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.894800    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.895792    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0428 18:12:12.896228    6096 pod_ready.go:92] pod "etcd-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:12.896292    6096 pod_ready.go:81] duration metric: took 8.0531ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.896318    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.896434    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:12:12.896434    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.896434    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.896434    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.900809    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:12.901345    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.901437    6096 round_trippers.go:580]     Audit-Id: 1da86003-1d09-49f5-8205-5a9ddfc0bc49
	I0428 18:12:12.901437    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.901495    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.901495    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.901495    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.901495    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.901706    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"e5571b43-6397-459f-b12d-b3d7f5b95eb0","resourceVersion":"404","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.231.169:8443","kubernetes.io/config.hash":"5553c54a41b436754fc14166f7928d5c","kubernetes.io/config.mirror":"5553c54a41b436754fc14166f7928d5c","kubernetes.io/config.seen":"2024-04-29T01:08:48.885068633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0428 18:12:12.902386    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:12.902386    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.902386    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.902386    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.904749    6096 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:12:12.904749    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.904749    6096 round_trippers.go:580]     Audit-Id: 976ad374-660c-4395-914d-e16fbf372ca0
	I0428 18:12:12.904749    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.904749    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.904749    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.904749    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.904749    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.904749    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0428 18:12:12.904749    6096 pod_ready.go:92] pod "kube-apiserver-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:12.904749    6096 pod_ready.go:81] duration metric: took 8.4314ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.904749    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.904749    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:12:12.905966    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.906036    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.906036    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.907612    6096 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:12:12.907612    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.907612    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.907612    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.907612    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.907612    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.907612    6096 round_trippers.go:580]     Audit-Id: b83ee448-3da0-4701-9885-feae403d8dd0
	I0428 18:12:12.907612    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.908816    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"405","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0428 18:12:12.909344    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:12.909344    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:12.909344    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:12.909344    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:12.910601    6096 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:12:12.910601    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:12.910601    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:12.911537    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:12.911537    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:12 GMT
	I0428 18:12:12.911537    6096 round_trippers.go:580]     Audit-Id: 3cfe86d9-3dac-4ada-a7a3-302e3bdc46c5
	I0428 18:12:12.911537    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:12.911537    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:12.911765    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0428 18:12:12.912288    6096 pod_ready.go:92] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:12.912288    6096 pod_ready.go:81] duration metric: took 7.5388ms for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:12.912288    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:13.065213    6096 request.go:629] Waited for 152.7623ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:12:13.065307    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:12:13.065307    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:13.065307    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:13.065307    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:13.069342    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:13.069850    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:13.069850    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:13 GMT
	I0428 18:12:13.069850    6096 round_trippers.go:580]     Audit-Id: b36ca92d-c443-4205-afb5-a5392f78121f
	I0428 18:12:13.069850    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:13.069850    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:13.069936    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:13.069936    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:13.070518    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"397","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0428 18:12:13.267938    6096 request.go:629] Waited for 196.5567ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:13.268208    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:13.268261    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:13.268261    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:13.268261    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:13.274999    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:12:13.275152    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:13.275208    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:13.275208    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:13.275208    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:13 GMT
	I0428 18:12:13.275208    6096 round_trippers.go:580]     Audit-Id: 15b6ade1-7810-48e2-9c3a-9f49b299f36d
	I0428 18:12:13.275208    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:13.275208    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:13.275745    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0428 18:12:13.275925    6096 pod_ready.go:92] pod "kube-proxy-bkkql" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:13.276506    6096 pod_ready.go:81] duration metric: took 364.1471ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:13.276506    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:13.470883    6096 request.go:629] Waited for 194.1894ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:12:13.471120    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:12:13.471120    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:13.471189    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:13.471189    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:13.474748    6096 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:12:13.474748    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:13.474748    6096 round_trippers.go:580]     Audit-Id: 5b8dc390-dffe-40dd-a95c-e6919b32a73e
	I0428 18:12:13.474748    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:13.474748    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:13.475265    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:13.475265    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:13.475265    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:13 GMT
	I0428 18:12:13.475446    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:12:13.671786    6096 request.go:629] Waited for 195.6323ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:13.672189    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:12:13.672189    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:13.672189    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:13.672364    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:13.674247    6096 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:12:13.675232    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:13.675232    6096 round_trippers.go:580]     Audit-Id: e8e82088-6a7c-46b7-b458-c4a561904012
	I0428 18:12:13.675313    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:13.675313    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:13.675313    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:13.675313    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:13.675313    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:13 GMT
	I0428 18:12:13.675626    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"640","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0428 18:12:13.675847    6096 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:13.675847    6096 pod_ready.go:81] duration metric: took 399.3406ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:13.675847    6096 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:13.875069    6096 request.go:629] Waited for 199.0574ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:12:13.875069    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:12:13.875069    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:13.875069    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:13.875069    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:13.880421    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:13.880497    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:13.880497    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:13.880497    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:13.880497    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:13.880497    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:13.880497    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:13 GMT
	I0428 18:12:13.880497    6096 round_trippers.go:580]     Audit-Id: 29afa431-dc56-4b57-9064-89cbfc2ce15f
	I0428 18:12:13.880497    6096 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"403","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0428 18:12:14.077742    6096 request.go:629] Waited for 196.2079ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:14.077938    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes/multinode-788600
	I0428 18:12:14.077938    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:14.077938    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:14.077938    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:14.084411    6096 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:12:14.084411    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:14.084411    6096 round_trippers.go:580]     Audit-Id: e4337050-0e58-44ed-accc-db746374a8d1
	I0428 18:12:14.084411    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:14.084411    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:14.084411    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:14.084411    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:14.084411    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:14 GMT
	I0428 18:12:14.085064    6096 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0428 18:12:14.085651    6096 pod_ready.go:92] pod "kube-scheduler-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:12:14.085832    6096 pod_ready.go:81] duration metric: took 409.9839ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:12:14.085832    6096 pod_ready.go:38] duration metric: took 1.2180563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:12:14.085832    6096 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 18:12:14.099034    6096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:12:14.124254    6096 system_svc.go:56] duration metric: took 38.4223ms WaitForService to wait for kubelet
	I0428 18:12:14.124254    6096 kubeadm.go:576] duration metric: took 20.0446552s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:12:14.124254    6096 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:12:14.265845    6096 request.go:629] Waited for 141.3971ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.231.169:8443/api/v1/nodes
	I0428 18:12:14.266083    6096 round_trippers.go:463] GET https://172.27.231.169:8443/api/v1/nodes
	I0428 18:12:14.266179    6096 round_trippers.go:469] Request Headers:
	I0428 18:12:14.266179    6096 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:12:14.266179    6096 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:12:14.270617    6096 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:12:14.270617    6096 round_trippers.go:577] Response Headers:
	I0428 18:12:14.270617    6096 round_trippers.go:580]     Audit-Id: 56772777-41d4-4d35-a48d-60c71f51b6f6
	I0428 18:12:14.270617    6096 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:12:14.270617    6096 round_trippers.go:580]     Content-Type: application/json
	I0428 18:12:14.270617    6096 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:12:14.270617    6096 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:12:14.270617    6096 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:12:14 GMT
	I0428 18:12:14.271041    6096 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"641"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"452","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0428 18:12:14.271926    6096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:12:14.271989    6096 node_conditions.go:123] node cpu capacity is 2
	I0428 18:12:14.271989    6096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:12:14.271989    6096 node_conditions.go:123] node cpu capacity is 2
	I0428 18:12:14.271989    6096 node_conditions.go:105] duration metric: took 147.7345ms to run NodePressure ...
	I0428 18:12:14.271989    6096 start.go:240] waiting for startup goroutines ...
	I0428 18:12:14.272102    6096 start.go:254] writing updated cluster config ...
	I0428 18:12:14.285961    6096 ssh_runner.go:195] Run: rm -f paused
	I0428 18:12:14.432662    6096 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0428 18:12:14.437249    6096 out.go:177] * Done! kubectl is now configured to use "multinode-788600" cluster and "default" namespace by default
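
The pod_ready.go and request.go lines above record two client-side mechanisms: a poll of each control-plane pod's Ready condition, and client-go's default rate limiter (the "Waited ... due to client-side throttling, not priority and fairness" lines). Below is an illustrative sketch only, not minikube's actual source; the namespace and pod name are copied from this log, and the kubeconfig path is an assumption.

```go
// Illustrative sketch of the readiness-polling pattern logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5/Burst=10; the ~150-200ms client-side
	// throttling waits above come from that limiter, not from API
	// Priority and Fairness. Raising these removes the waits at the
	// cost of more apiserver load.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "waiting up to 6m0s for pod ... to be Ready", as in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := clientset.CoreV1().Pods("kube-system").
				Get(ctx, "kube-apiserver-multinode-788600", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```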
	
	
	==> Docker <==
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.143413847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.166433514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.166623315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.166643915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.166757916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:09:13 multinode-788600 cri-dockerd[1229]: time="2024-04-29T01:09:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20d6a18478fc172d7284034e08c07103cac186aed5aef3e4a8a3ab8091c87992/resolv.conf as [nameserver 172.27.224.1]"
	Apr 29 01:09:13 multinode-788600 cri-dockerd[1229]: time="2024-04-29T01:09:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/70af634f6134dfc001149d7899f5d982015315a2312b19d186fcc20911d8ae65/resolv.conf as [nameserver 172.27.224.1]"
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.551535901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.551748102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.551855403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.552035404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.747640886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.747988888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.748068889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:09:13 multinode-788600 dockerd[1328]: time="2024-04-29T01:09:13.748369591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:12:38 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:38.644271909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:12:38 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:38.644371109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:12:38 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:38.644391209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:12:38 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:38.646360606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:12:38 multinode-788600 cri-dockerd[1229]: time="2024-04-29T01:12:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fcbd24a1db2d897726cc2406fc1aa50c04cfb73959e703974e6b83968a7f6971/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 01:12:40 multinode-788600 cri-dockerd[1229]: time="2024-04-29T01:12:40Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 01:12:40 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:40.310180468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:12:40 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:40.310967968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:12:40 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:40.311739168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:12:40 multinode-788600 dockerd[1328]: time="2024-04-29T01:12:40.312210268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d0d5fbf9b871e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   fcbd24a1db2d8       busybox-fc5497c4f-4qvlm
	64e6fcf4a3f2f       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   70af634f6134d       coredns-7db6d8ff4d-rp2lx
	16ea9b9acd267       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   20d6a18478fc1       storage-provisioner
	33e59494d8be9       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   d1342e9d71114       kindnet-52rrh
	8542b2c39cf5b       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   776d075f3716e       kube-proxy-bkkql
	d55fefd692cfc       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            0                   26381d4606b51       kube-scheduler-multinode-788600
	e148c0cdbae01       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   038a267a1caf4       kube-apiserver-multinode-788600
	edb2c636ad5d7       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   0                   9ffe1b8b41e4c       kube-controller-manager-multinode-788600
	27388b03fb268       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   8328e1b41d78b       etcd-multinode-788600
	
	
	==> coredns [64e6fcf4a3f2] <==
	[INFO] 10.244.1.2:48437 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001442s
	[INFO] 10.244.0.3:56624 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001063s
	[INFO] 10.244.0.3:53871 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001397s
	[INFO] 10.244.0.3:34178 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001399s
	[INFO] 10.244.0.3:59684 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001391s
	[INFO] 10.244.0.3:35758 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0003144s
	[INFO] 10.244.0.3:54201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000513s
	[INFO] 10.244.0.3:57683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000876s
	[INFO] 10.244.0.3:49694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001237s
	[INFO] 10.244.1.2:48711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229s
	[INFO] 10.244.1.2:37460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001261s
	[INFO] 10.244.1.2:32950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001014s
	[INFO] 10.244.1.2:49157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000511s
	[INFO] 10.244.0.3:49454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003908s
	[INFO] 10.244.0.3:56632 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000654s
	[INFO] 10.244.0.3:51203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000936s
	[INFO] 10.244.0.3:53433 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001697s
	[INFO] 10.244.1.2:54748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001237s
	[INFO] 10.244.1.2:55201 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002599s
	[INFO] 10.244.1.2:45426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000815s
	[INFO] 10.244.1.2:49822 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001063s
	[INFO] 10.244.0.3:38954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118s
	[INFO] 10.244.0.3:58102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002236s
	[INFO] 10.244.0.3:48832 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001238s
	[INFO] 10.244.0.3:49749 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001072s
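
The CoreDNS lines above record A/AAAA/PTR lookups for kubernetes.default and host.minikube.internal, all answered NOERROR in well under a millisecond. A minimal sketch of replaying one of those queries follows; it assumes it runs somewhere that can reach the kube-dns ClusterIP (10.96.0.10, the nameserver from the resolv.conf rewrite in the Docker section), e.g. inside a cluster pod.

```go
// Replay one of the CoreDNS lookups logged above, bypassing the host resolver.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every query straight to CoreDNS at the cluster DNS ClusterIP.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // answered NOERROR in the CoreDNS log above
}
```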
	
	
	==> describe nodes <==
	Name:               multinode-788600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T18_08_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:08:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:13:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:09:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.231.169
	  Hostname:    multinode-788600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 82dd6ad3dc974090953e528ab9ac6704
	  System UUID:                6f78c2a9-1744-3642-a944-13bbeb7f5c76
	  Boot ID:                    06d0d51f-2bc9-4aab-a844-51e7702061f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4qvlm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-7db6d8ff4d-rp2lx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m24s
	  kube-system                 etcd-multinode-788600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m38s
	  kube-system                 kindnet-52rrh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-apiserver-multinode-788600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-multinode-788600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-bkkql                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-multinode-788600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s (x6 over 4m47s)  kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x6 over 4m47s)  kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x6 over 4m47s)  kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s                  kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s                  kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s                  kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node multinode-788600 event: Registered Node multinode-788600 in Controller
	  Normal  NodeReady                4m15s                  kubelet          Node multinode-788600 status is now: NodeReady
	
	
	Name:               multinode-788600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T18_11_53_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:11:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:13:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 01:12:54 +0000   Mon, 29 Apr 2024 01:12:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.230.221
	  Hostname:    multinode-788600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f3f256c1ef74f1aabdee6846e11e827
	  System UUID:                ea348b67-6b29-8b46-84e3-ebf01858b203
	  Boot ID:                    23d1db59-b5c6-484d-aa22-1e61e2ff3b17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fdn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-hnvm4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      94s
	  kube-system                 kube-proxy-kc8c4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  NodeHasSufficientMemory  94s (x2 over 94s)  kubelet          Node multinode-788600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x2 over 94s)  kubelet          Node multinode-788600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x2 over 94s)  kubelet          Node multinode-788600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                node-controller  Node multinode-788600-m02 event: Registered Node multinode-788600-m02 in Controller
	  Normal  NodeReady                75s                kubelet          Node multinode-788600-m02 status is now: NodeReady
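
The Capacity/Allocatable blocks above are what minikube's NodePressure check reads (the node_conditions.go lines earlier report the same 2-CPU / 17734596Ki figures), and the Allocated resources percentages are requests relative to allocatable (850m of 2000m CPU is the 42% on the control-plane node). A hedged client-go sketch, not minikube's own code, that lists the same per-node figures and pressure conditions:

```go
// List each node's allocatable resources and pressure conditions,
// mirroring the "describe nodes" output above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		storage := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
		// Expect cpu=2, ephemeral-storage=17734596Ki for both nodes in this run.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```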
	
	
	==> dmesg <==
	[  +6.818031] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 01:07] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.190816] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Apr29 01:08] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.101064] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.546930] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.208568] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.235798] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.828685] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.207127] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.205793] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.321942] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[ +12.097831] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.109384] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.741729] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +7.263521] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.094265] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.067816] systemd-fstab-generator[2126]: Ignoring "noauto" option for root device
	[  +0.143456] kauditd_printk_skb: 62 callbacks suppressed
	[Apr29 01:09] systemd-fstab-generator[2304]: Ignoring "noauto" option for root device
	[  +0.216629] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.543337] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 01:12] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [27388b03fb26] <==
	{"level":"warn","ts":"2024-04-29T01:12:03.870738Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.335146Z","time spent":"535.562527ms","remote":"127.0.0.1:43798","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-788600\" mod_revision:575 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-788600\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-788600\" > >"}
	{"level":"warn","ts":"2024-04-29T01:12:03.872971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.348122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-788600-m02\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-04-29T01:12:03.873174Z","caller":"traceutil/trace.go:171","msg":"trace[1090185715] range","detail":"{range_begin:/registry/minions/multinode-788600-m02; range_end:; response_count:1; response_revision:618; }","duration":"513.533921ms","start":"2024-04-29T01:12:03.359585Z","end":"2024-04-29T01:12:03.873119Z","steps":["trace[1090185715] 'agreement among raft nodes before linearized reading'  (duration: 510.966032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:03.873294Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.359568Z","time spent":"513.715021ms","remote":"127.0.0.1:43714","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2871,"request content":"key:\"/registry/minions/multinode-788600-m02\" "}
	{"level":"warn","ts":"2024-04-29T01:12:04.277694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.769573ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10011940845992786974 > lease_revoke:<id:0af18f27659fbb80>","response":"size:28"}
	{"level":"info","ts":"2024-04-29T01:12:04.278563Z","caller":"traceutil/trace.go:171","msg":"trace[979857718] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"869.88704ms","start":"2024-04-29T01:12:03.408658Z","end":"2024-04-29T01:12:04.278546Z","steps":["trace[979857718] 'process raft request'  (duration: 869.185243ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T01:12:04.278929Z","caller":"traceutil/trace.go:171","msg":"trace[1968110232] linearizableReadLoop","detail":"{readStateIndex:671; appliedIndex:668; }","duration":"408.572197ms","start":"2024-04-29T01:12:03.870344Z","end":"2024-04-29T01:12:04.278917Z","steps":["trace[1968110232] 'read index received'  (duration: 158.57183ms)","trace[1968110232] 'applied index is now lower than readState.Index'  (duration: 249.999767ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T01:12:04.279669Z","caller":"traceutil/trace.go:171","msg":"trace[954421072] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"661.019226ms","start":"2024-04-29T01:12:03.618634Z","end":"2024-04-29T01:12:04.279653Z","steps":["trace[954421072] 'process raft request'  (duration: 659.864631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:04.280117Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.618614Z","time spent":"661.464325ms","remote":"127.0.0.1:43798","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-788600-m02\" mod_revision:600 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-788600-m02\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-788600-m02\" > >"}
	{"level":"warn","ts":"2024-04-29T01:12:04.278833Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.408638Z","time spent":"869.97794ms","remote":"127.0.0.1:43714","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3134,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-788600-m02\" mod_revision:609 > success:<request_put:<key:\"/registry/minions/multinode-788600-m02\" value_size:3088 >> failure:<request_range:<key:\"/registry/minions/multinode-788600-m02\" > >"}
	{"level":"warn","ts":"2024-04-29T01:12:04.282083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"854.021408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-04-29T01:12:04.282434Z","caller":"traceutil/trace.go:171","msg":"trace[1555735781] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:620; }","duration":"854.361706ms","start":"2024-04-29T01:12:03.427974Z","end":"2024-04-29T01:12:04.282336Z","steps":["trace[1555735781] 'agreement among raft nodes before linearized reading'  (duration: 853.920008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:04.282549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.427841Z","time spent":"854.697704ms","remote":"127.0.0.1:43698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1139,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-04-29T01:12:04.282422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"397.656444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-788600-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-29T01:12:04.284579Z","caller":"traceutil/trace.go:171","msg":"trace[393427673] range","detail":"{range_begin:/registry/minions/multinode-788600-m02; range_end:; response_count:1; response_revision:620; }","duration":"399.828335ms","start":"2024-04-29T01:12:03.884741Z","end":"2024-04-29T01:12:04.28457Z","steps":["trace[393427673] 'agreement among raft nodes before linearized reading'  (duration: 395.410753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:04.285178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.884728Z","time spent":"400.438333ms","remote":"127.0.0.1:43714","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-788600-m02\" "}
	{"level":"warn","ts":"2024-04-29T01:12:04.283275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"856.184798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T01:12:04.287059Z","caller":"traceutil/trace.go:171","msg":"trace[77783226] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:620; }","duration":"860.008683ms","start":"2024-04-29T01:12:03.427041Z","end":"2024-04-29T01:12:04.287049Z","steps":["trace[77783226] 'agreement among raft nodes before linearized reading'  (duration: 855.431402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:04.287413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:03.427029Z","time spent":"860.373081ms","remote":"127.0.0.1:44732","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T01:12:09.159455Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.24019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-788600-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-29T01:12:09.159707Z","caller":"traceutil/trace.go:171","msg":"trace[1011626683] range","detail":"{range_begin:/registry/minions/multinode-788600-m02; range_end:; response_count:1; response_revision:630; }","duration":"304.36779ms","start":"2024-04-29T01:12:08.855162Z","end":"2024-04-29T01:12:09.15953Z","steps":["trace[1011626683] 'range keys from in-memory index tree'  (duration: 303.916292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:09.159718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.870136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T01:12:09.159755Z","caller":"traceutil/trace.go:171","msg":"trace[1636932123] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:630; }","duration":"318.936836ms","start":"2024-04-29T01:12:08.840808Z","end":"2024-04-29T01:12:09.159745Z","steps":["trace[1636932123] 'count revisions from in-memory index tree'  (duration: 318.750336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T01:12:09.159757Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:08.855149Z","time spent":"304.596389ms","remote":"127.0.0.1:43714","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-788600-m02\" "}
	{"level":"warn","ts":"2024-04-29T01:12:09.15978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T01:12:08.840776Z","time spent":"318.995736ms","remote":"127.0.0.1:44032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":28,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true "}
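	
	Note: the etcd warnings above are structured JSON; every slow apply carries a "took" duration that is measured against the 100ms "expected-duration". A minimal Go sketch (not part of the test suite; it assumes the log lines are piped in on stdin) that pulls out just the slow applies:
	
	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"time"
	)
	
	// etcdLine models only the fields we need from the JSON log lines above.
	type etcdLine struct {
		TS   string `json:"ts"`
		Msg  string `json:"msg"`
		Took string `json:"took"` // e.g. "513.348122ms"
	}
	
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1<<20), 1<<20) // some request dumps are long
		for sc.Scan() {
			var l etcdLine
			if json.Unmarshal(sc.Bytes(), &l) != nil || l.Msg != "apply request took too long" {
				continue // skip non-JSON lines and other messages
			}
			if took, err := time.ParseDuration(l.Took); err == nil && took > 100*time.Millisecond {
				fmt.Printf("%s slow apply: %s\n", l.TS, took)
			}
		}
	}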
	
	
	==> kernel <==
	 01:13:27 up 6 min,  0 users,  load average: 0.18, 0.20, 0.10
	Linux multinode-788600 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33e59494d8be] <==
	I0429 01:12:22.129368       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:12:32.143314       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:12:32.143458       1 main.go:227] handling current node
	I0429 01:12:32.143477       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:12:32.143486       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:12:42.155647       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:12:42.155785       1 main.go:227] handling current node
	I0429 01:12:42.155813       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:12:42.155822       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:12:52.171641       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:12:52.171792       1 main.go:227] handling current node
	I0429 01:12:52.171860       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:12:52.171889       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:13:02.184915       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:13:02.184963       1 main.go:227] handling current node
	I0429 01:13:02.184977       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:13:02.184987       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:13:12.194796       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:13:12.194896       1 main.go:227] handling current node
	I0429 01:13:12.194911       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:13:12.194919       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:13:22.201596       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:13:22.201697       1 main.go:227] handling current node
	I0429 01:13:22.201711       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:13:22.201719       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
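	
	Note: kindnet runs a fixed-interval reconcile, one pass every ~10s over all nodes, logging the current node and each peer's pod CIDR. An illustrative Go sketch of that loop shape (Node and syncNode are placeholders, not kindnet's real types; the names, IPs, and CIDR come from the log above):
	
	package main
	
	import (
		"log"
		"time"
	)
	
	// Node is a stand-in for the per-node view the daemon keeps.
	type Node struct {
		Name    string
		IPs     []string
		PodCIDR string
	}
	
	func syncNode(n Node, self string) {
		log.Printf("Handling node with IPs: %v", n.IPs)
		if n.Name == self {
			log.Print("handling current node")
			return // nothing to route for ourselves
		}
		// For remote nodes the daemon programs routes to their pod CIDR.
		log.Printf("Node %s has CIDR [%s]", n.Name, n.PodCIDR)
	}
	
	func main() {
		self := "multinode-788600"
		nodes := []Node{
			{Name: self, IPs: []string{"172.27.231.169"}},
			{Name: "multinode-788600-m02", IPs: []string{"172.27.230.221"}, PodCIDR: "10.244.1.0/24"},
		}
		for range time.Tick(10 * time.Second) { // matches the ~10s cadence above
			for _, n := range nodes {
				syncNode(n, self)
			}
		}
	}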
	
	
	==> kube-apiserver [e148c0cdbae0] <==
	I0429 01:12:03.875810       1 trace.go:236] Trace[1477788174]: "Get" accept:application/json, */*,audit-id:172cfd7b-ffab-4d85-82ea-56b3734f4516,client:172.27.224.1,api-group:,api-version:v1,name:multinode-788600-m02,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-788600-m02,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 01:12:03.358) (total time: 517ms):
	Trace[1477788174]: ---"About to write a response" 516ms (01:12:03.875)
	Trace[1477788174]: [517.081506ms] [517.081506ms] END
	I0429 01:12:04.283585       1 trace.go:236] Trace[595003130]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5fbd0368-9df1-434d-a133-ae2da7262007,client:172.27.230.221,api-group:coordination.k8s.io,api-version:v1,name:multinode-788600-m02,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-788600-m02,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 01:12:03.617) (total time: 666ms):
	Trace[595003130]: ["GuaranteedUpdate etcd3" audit-id:5fbd0368-9df1-434d-a133-ae2da7262007,key:/leases/kube-node-lease/multinode-788600-m02,type:*coordination.Lease,resource:leases.coordination.k8s.io 666ms (01:12:03.617)
	Trace[595003130]:  ---"Txn call completed" 665ms (01:12:04.283)]
	Trace[595003130]: [666.504203ms] [666.504203ms] END
	I0429 01:12:04.284293       1 trace.go:236] Trace[1677323455]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e761a062-d08b-492f-85d1-2666875de6b6,client:172.27.230.221,api-group:,api-version:v1,name:multinode-788600-m02,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-788600-m02/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (29-Apr-2024 01:12:03.404) (total time: 879ms):
	Trace[1677323455]: ["GuaranteedUpdate etcd3" audit-id:e761a062-d08b-492f-85d1-2666875de6b6,key:/minions/multinode-788600-m02,type:*core.Node,resource:nodes 878ms (01:12:03.405)
	Trace[1677323455]:  ---"Txn call completed" 875ms (01:12:04.283)]
	Trace[1677323455]: ---"Object stored in database" 876ms (01:12:04.283)
	Trace[1677323455]: [879.242301ms] [879.242301ms] END
	I0429 01:12:04.285078       1 trace.go:236] Trace[367025544]: "Get" accept:application/json, */*,audit-id:16af9b6c-31be-4b99-ad6c-b0717a833ad0,client:172.27.231.169,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 01:12:03.427) (total time: 857ms):
	Trace[367025544]: ---"About to write a response" 857ms (01:12:04.284)
	Trace[367025544]: [857.733392ms] [857.733392ms] END
	E0429 01:12:43.718626       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50095: use of closed network connection
	E0429 01:12:44.185844       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50097: use of closed network connection
	E0429 01:12:44.729888       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50099: use of closed network connection
	E0429 01:12:45.190395       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50101: use of closed network connection
	E0429 01:12:45.630559       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50103: use of closed network connection
	E0429 01:12:46.079364       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50105: use of closed network connection
	E0429 01:12:46.890209       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50109: use of closed network connection
	E0429 01:12:57.350047       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50111: use of closed network connection
	E0429 01:12:57.792467       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50113: use of closed network connection
	E0429 01:13:08.233435       1 conn.go:339] Error on socket receive: read tcp 172.27.231.169:8443->172.27.224.1:50115: use of closed network connection
	
	
	==> kube-controller-manager [edb2c636ad5d] <==
	I0429 01:09:02.642385       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 01:09:02.675396       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 01:09:02.675432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 01:09:03.247786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.028932639s"
	I0429 01:09:03.300031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.140691ms"
	I0429 01:09:03.333282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.276204ms"
	I0429 01:09:03.333380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.201µs"
	I0429 01:09:03.334746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="632.906µs"
	I0429 01:09:12.497374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.7µs"
	I0429 01:09:12.533081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.8µs"
	I0429 01:09:14.893791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72µs"
	I0429 01:09:14.941059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.346541ms"
	I0429 01:09:14.942008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.7µs"
	I0429 01:09:17.024665       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 01:11:53.161790       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m02\" does not exist"
	I0429 01:11:53.177770       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m02" podCIDRs=["10.244.1.0/24"]
	I0429 01:11:57.056826       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m02"
	I0429 01:12:12.447989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:12:38.086505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.050872ms"
	I0429 01:12:38.156586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.927316ms"
	I0429 01:12:38.156985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.8µs"
	I0429 01:12:40.843412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.957702ms"
	I0429 01:12:40.844132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.3µs"
	I0429 01:12:40.953439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.253802ms"
	I0429 01:12:40.953522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.8µs"
	
	
	==> kube-proxy [8542b2c39cf5] <==
	I0429 01:09:05.708863       1 server_linux.go:69] "Using iptables proxy"
	I0429 01:09:05.742050       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.231.169"]
	I0429 01:09:05.825870       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 01:09:05.825916       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 01:09:05.826023       1 server_linux.go:165] "Using iptables Proxier"
	I0429 01:09:05.838937       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 01:09:05.840502       1 server.go:872] "Version info" version="v1.30.0"
	I0429 01:09:05.840525       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:09:05.843961       1 config.go:192] "Starting service config controller"
	I0429 01:09:05.846365       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 01:09:05.846409       1 config.go:319] "Starting node config controller"
	I0429 01:09:05.846416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 01:09:05.849462       1 config.go:101] "Starting endpoint slice config controller"
	I0429 01:09:05.849804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 01:09:05.946586       1 shared_informer.go:320] Caches are synced for node config
	I0429 01:09:05.946631       1 shared_informer.go:320] Caches are synced for service config
	I0429 01:09:05.953363       1 shared_informer.go:320] Caches are synced for endpoint slice config
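	
	Note: the kube-proxy startup lines show the standard client-go informer handshake: start the informers, log "Waiting for caches to sync", block until the initial list completes, then log "Caches are synced". A minimal sketch of that pattern with client-go (a sketch only, assuming a reachable cluster and a kubeconfig at the default path):
	
	package main
	
	import (
		"log"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
	
		// The same resource kinds kube-proxy watches: services and endpoint slices.
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svc := factory.Core().V1().Services().Informer()
		eps := factory.Discovery().V1().EndpointSlices().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		log.Print("Waiting for caches to sync for service and endpoint slice config")
		if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
			log.Fatal("caches never synced")
		}
		log.Print("Caches are synced")
	}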
	
	
	==> kube-scheduler [d55fefd692cf] <==
	W0429 01:08:46.888044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 01:08:46.888518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 01:08:47.003501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.003561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.057469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.059611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.081787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 01:08:47.082341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 01:08:47.119979       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 01:08:47.120206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 01:08:47.214340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.214395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.226615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 01:08:47.226976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 01:08:47.234210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 01:08:47.234301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 01:08:47.252946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 01:08:47.253198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 01:08:47.278229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 01:08:47.278421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 01:08:47.396441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 01:08:47.396483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 01:08:47.456293       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 01:08:47.456674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 01:08:49.334502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
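	
	Note: the scheduler's "forbidden" list errors are confined to the first seconds after startup (01:08:46-47) and stop once its authentication/authorization state syncs (the client-ca cache line at 01:08:49), so they are typically bootstrap noise rather than a broken RBAC binding. To verify the binding directly, one option is a SubjectAccessReview against the API server; a hedged Go sketch (requires a kubeconfig allowed to create access reviews):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Ask: may system:kube-scheduler list poddisruptionbudgets? This is the
		// exact permission the first startup error above complains about.
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "policy",
					Resource: "poddisruptionbudgets",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}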
	
	
	==> kubelet <==
	Apr 29 01:09:14 multinode-788600 kubelet[2133]: I0429 01:09:14.895606    2133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.895598193 podStartE2EDuration="4.895598193s" podCreationTimestamp="2024-04-29 01:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 01:09:13.75112751 +0000 UTC m=+25.006073250" watchObservedRunningTime="2024-04-29 01:09:14.895598193 +0000 UTC m=+26.150543833"
	Apr 29 01:09:49 multinode-788600 kubelet[2133]: E0429 01:09:49.087007    2133 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:09:49 multinode-788600 kubelet[2133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:09:49 multinode-788600 kubelet[2133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:09:49 multinode-788600 kubelet[2133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:09:49 multinode-788600 kubelet[2133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:10:49 multinode-788600 kubelet[2133]: E0429 01:10:49.078815    2133 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:10:49 multinode-788600 kubelet[2133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:10:49 multinode-788600 kubelet[2133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:10:49 multinode-788600 kubelet[2133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:10:49 multinode-788600 kubelet[2133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:11:49 multinode-788600 kubelet[2133]: E0429 01:11:49.079369    2133 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:11:49 multinode-788600 kubelet[2133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:11:49 multinode-788600 kubelet[2133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:11:49 multinode-788600 kubelet[2133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:11:49 multinode-788600 kubelet[2133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:12:38 multinode-788600 kubelet[2133]: I0429 01:12:38.077847    2133 topology_manager.go:215] "Topology Admit Handler" podUID="a724a733-4b18-4f15-8918-9fe472fcd02c" podNamespace="default" podName="busybox-fc5497c4f-4qvlm"
	Apr 29 01:12:38 multinode-788600 kubelet[2133]: I0429 01:12:38.245317    2133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bcbl\" (UniqueName: \"kubernetes.io/projected/a724a733-4b18-4f15-8918-9fe472fcd02c-kube-api-access-5bcbl\") pod \"busybox-fc5497c4f-4qvlm\" (UID: \"a724a733-4b18-4f15-8918-9fe472fcd02c\") " pod="default/busybox-fc5497c4f-4qvlm"
	Apr 29 01:12:38 multinode-788600 kubelet[2133]: I0429 01:12:38.851580    2133 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbd24a1db2d897726cc2406fc1aa50c04cfb73959e703974e6b83968a7f6971"
	Apr 29 01:12:44 multinode-788600 kubelet[2133]: E0429 01:12:44.186837    2133 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55950->127.0.0.1:35855: write tcp 127.0.0.1:55950->127.0.0.1:35855: write: broken pipe
	Apr 29 01:12:49 multinode-788600 kubelet[2133]: E0429 01:12:49.079055    2133 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:12:49 multinode-788600 kubelet[2133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:12:49 multinode-788600 kubelet[2133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:12:49 multinode-788600 kubelet[2133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:12:49 multinode-788600 kubelet[2133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
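	
	Note: the recurring kubelet canary failure means this Buildroot guest kernel lacks ip6table_nat support, so the IPv6 "nat" table cannot be initialized; it repeats once a minute and does not affect this cluster, which kube-proxy runs in single-stack IPv4 mode (see "No iptables support for family IPv6" above). A small Go sketch of the same probe, to be run inside the guest (e.g. via minikube ssh):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same operation the canary attempts: touch the ip6tables nat table.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L").CombinedOutput()
		if err != nil {
			// On this guest kernel the table is missing, matching the log:
			// "can't initialize ip6tables table `nat': Table does not exist"
			fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table present")
	}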
	

-- /stdout --
** stderr ** 
	W0428 18:13:19.911725     256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-788600 -n multinode-788600
E0428 18:13:41.005731    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-788600 -n multinode-788600: (11.4379298s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-788600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.09s)

TestMultiNode/serial/RestartKeepsNodes (439.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-788600
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-788600
E0428 18:28:41.019056    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-788600: (1m35.7901316s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-788600 --wait=true -v=8 --alsologtostderr
E0428 18:30:36.433013    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 18:31:44.232282    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:33:39.635597    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 18:33:41.009225    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-788600 --wait=true -v=8 --alsologtostderr: exit status 90 (5m9.0988745s)

-- stdout --
	* [multinode-788600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-788600" primary control-plane node in "multinode-788600" cluster
	* Restarting existing hyperv VM for "multinode-788600" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-788600-m02" worker node in "multinode-788600" cluster
	* Restarting existing hyperv VM for "multinode-788600-m02" ...
	* Found network options:
	  - NO_PROXY=172.27.239.170
	  - NO_PROXY=172.27.239.170
	
	

-- /stdout --
** stderr ** 
	W0428 18:29:06.807995    5100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 18:29:06.809727    5100 out.go:291] Setting OutFile to fd 1908 ...
	I0428 18:29:06.810353    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:29:06.810353    5100 out.go:304] Setting ErrFile to fd 1912...
	I0428 18:29:06.810353    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:29:06.834778    5100 out.go:298] Setting JSON to false
	I0428 18:29:06.838611    5100 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11589,"bootTime":1714342556,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 18:29:06.838611    5100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 18:29:06.940529    5100 out.go:177] * [multinode-788600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 18:29:07.030586    5100 notify.go:220] Checking for updates...
	I0428 18:29:07.077632    5100 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:29:07.374230    5100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 18:29:07.485070    5100 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 18:29:07.638229    5100 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 18:29:07.772014    5100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 18:29:07.826039    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:29:07.826481    5100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 18:29:13.079444    5100 out.go:177] * Using the hyperv driver based on existing profile
	I0428 18:29:13.183795    5100 start.go:297] selected driver: hyperv
	I0428 18:29:13.183795    5100 start.go:901] validating driver "hyperv" against &{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:29:13.184921    5100 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 18:29:13.238392    5100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:29:13.239401    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:29:13.239401    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:29:13.239658    5100 start.go:340] cluster config:
	{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:29:13.239658    5100 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 18:29:13.267965    5100 out.go:177] * Starting "multinode-788600" primary control-plane node in "multinode-788600" cluster
	I0428 18:29:13.273325    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:29:13.273757    5100 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 18:29:13.273855    5100 cache.go:56] Caching tarball of preloaded images
	I0428 18:29:13.274319    5100 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:29:13.274564    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:29:13.274592    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:29:13.277394    5100 start.go:360] acquireMachinesLock for multinode-788600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:29:13.277394    5100 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-788600"
	I0428 18:29:13.278010    5100 start.go:96] Skipping create...Using existing machine configuration
	I0428 18:29:13.278010    5100 fix.go:54] fixHost starting: 
	I0428 18:29:13.278669    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:15.841355    5100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:29:15.841355    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:15.841437    5100 fix.go:112] recreateIfNeeded on multinode-788600: state=Stopped err=<nil>
	W0428 18:29:15.841437    5100 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 18:29:15.844029    5100 out.go:177] * Restarting existing hyperv VM for "multinode-788600" ...
	I0428 18:29:15.847206    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:18.788290    5100 main.go:141] libmachine: Waiting for host to start...
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:23.329935    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:23.329986    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:24.337456    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:26.424769    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:26.424769    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:26.424959    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:28.835446    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:28.835446    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:29.845210    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:31.915507    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:31.915507    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:31.916194    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:34.321357    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:34.321830    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:35.322335    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:39.926983    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:39.926983    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:40.928783    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:43.017582    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:43.018601    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:43.018670    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:45.467215    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:45.467701    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:45.470855    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:47.452061    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:47.453391    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:47.453481    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:49.918620    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:49.918620    5100 main.go:141] libmachine: [stderr =====>] : 
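	
	Note: the block above is libmachine's host-start wait loop: it alternates two PowerShell probes, ( Hyper-V\Get-VM … ).state and the adapter's ipaddresses[0], and keeps retrying until Hyper-V reports an address (172.27.239.170 here). A rough Go sketch of the same poll, reusing the commands shown in the log (the VM name is taken from this run):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func main() {
		const vm = "multinode-788600"
		query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
		for {
			out, err := exec.Command("powershell.exe",
				"-NoProfile", "-NonInteractive", query).Output()
			ip := strings.TrimSpace(string(out))
			if err == nil && ip != "" {
				fmt.Println("VM reachable at", ip) // e.g. 172.27.239.170
				return
			}
			time.Sleep(time.Second) // the real loop also rechecks the VM state each round
		}
	}
	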
	I0428 18:29:49.919129    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:29:49.921224    5100 machine.go:94] provisionDockerMachine start ...
	I0428 18:29:49.921854    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:51.906534    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:51.906962    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:51.906962    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:54.344777    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:54.345162    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:54.351253    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:29:54.351970    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:29:54.351970    5100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:29:54.482939    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 18:29:54.483063    5100 buildroot.go:166] provisioning hostname "multinode-788600"
	I0428 18:29:54.483182    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:58.861415    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:58.861500    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:58.866474    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:29:58.867158    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:29:58.867158    5100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600 && echo "multinode-788600" | sudo tee /etc/hostname
	I0428 18:29:59.026469    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600
	
	I0428 18:29:59.027057    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:01.078535    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:01.078960    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:01.079062    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:03.473105    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:03.473105    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:03.480109    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:03.480643    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:03.480643    5100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:30:03.632326    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 18:30:03.632436    5100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:30:03.632436    5100 buildroot.go:174] setting up certificates
	I0428 18:30:03.632533    5100 provision.go:84] configureAuth start
	I0428 18:30:03.632662    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:05.623591    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:05.623591    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:05.623674    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:07.995919    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:07.996008    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:07.996008    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:09.994705    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:09.994705    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:09.994978    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:12.476810    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:12.476810    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:12.476810    5100 provision.go:143] copyHostCerts
	I0428 18:30:12.477065    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:30:12.477065    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:30:12.477065    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:30:12.477997    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:30:12.479104    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:30:12.479438    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:30:12.479438    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:30:12.479915    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:30:12.480977    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:30:12.481170    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:30:12.481170    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:30:12.481170    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:30:12.482569    5100 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600 san=[127.0.0.1 172.27.239.170 localhost minikube multinode-788600]
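provision.go then mints a server certificate whose SANs cover every identity the Docker TLS endpoint may be reached by: loopback, the VM's current IP, and the machine names. A self-contained sketch using Go's standard crypto/x509 (self-signed here for brevity; per the log line above, the real flow signs with the minikube CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-788600"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: IPs and DNS names the TLS endpoint must answer for.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.239.170")},
		DNSNames:    []string{"localhost", "minikube", "multinode-788600"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}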
	I0428 18:30:12.565240    5100 provision.go:177] copyRemoteCerts
	I0428 18:30:12.578456    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:30:12.578546    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:14.563247    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:14.563247    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:14.564084    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:17.004731    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:17.004884    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:17.005001    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:17.120514    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5420479s)
	I0428 18:30:17.120569    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:30:17.121103    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:30:17.169984    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:30:17.170584    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0428 18:30:17.216472    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:30:17.216472    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 18:30:17.262921    5100 provision.go:87] duration metric: took 13.630358s to configureAuth
	I0428 18:30:17.262921    5100 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:30:17.263897    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:30:17.264012    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:19.259871    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:19.259871    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:19.260050    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:21.723377    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:21.723454    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:21.729319    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:21.730083    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:21.730083    5100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:30:21.872016    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:30:21.872016    5100 buildroot.go:70] root file system type: tmpfs
	I0428 18:30:21.872016    5100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:30:21.872016    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:26.313949    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:26.313949    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:26.322783    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:26.322938    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:26.322938    5100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:30:26.486115    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:30:26.486115    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:30.893142    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:30.893142    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:30.900075    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:30.900075    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:30.900075    5100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:30:33.420018    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:30:33.420018    5100 machine.go:97] duration metric: took 43.498168s to provisionDockerMachine
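The `diff -u old new || { mv ...; systemctl ... }` one-liner above is a write-if-changed guard: Docker is only re-enabled and restarted when the rendered unit differs from what is on disk, and here diff failed simply because no unit existed yet on the fresh tmpfs root, so the new file was installed and the symlink created. The same idiom in Go (an illustrative stand-in, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged mirrors the "diff || mv" step: the candidate file only
// replaces the live one (which triggers a restart in the real flow) when its
// contents differ or the live file is missing.
func installIfChanged(live, candidate string) (changed bool, err error) {
	oldData, err := os.ReadFile(live)
	if err == nil && bytes.Equal(oldData, mustRead(candidate)) {
		return false, os.Remove(candidate) // identical: discard the candidate
	}
	return true, os.Rename(candidate, live)
}

func mustRead(p string) []byte {
	b, err := os.ReadFile(p)
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	changed, err := installIfChanged("docker.service", "docker.service.new")
	fmt.Println("changed:", changed, "err:", err)
}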
	I0428 18:30:33.420018    5100 start.go:293] postStartSetup for "multinode-788600" (driver="hyperv")
	I0428 18:30:33.420018    5100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:30:33.433580    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:30:33.433580    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:35.421597    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:35.421597    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:35.421967    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:37.810277    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:37.811012    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:37.811315    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:37.920287    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4866971s)
	I0428 18:30:37.932767    5100 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:30:37.939254    5100 command_runner.go:130] > NAME=Buildroot
	I0428 18:30:37.939254    5100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:30:37.939254    5100 command_runner.go:130] > ID=buildroot
	I0428 18:30:37.939254    5100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:30:37.939254    5100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:30:37.939254    5100 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:30:37.939254    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:30:37.939952    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:30:37.940475    5100 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:30:37.940475    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:30:37.952512    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:30:37.969990    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:30:38.017497    5100 start.go:296] duration metric: took 4.5974689s for postStartSetup
	I0428 18:30:38.018511    5100 fix.go:56] duration metric: took 1m24.7403132s for fixHost
	I0428 18:30:38.018511    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:40.002285    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:40.002569    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:40.002569    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:42.426765    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:42.427054    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:42.433213    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:42.433408    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:42.433408    5100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 18:30:42.568495    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714354242.563104735
	
	I0428 18:30:42.568584    5100 fix.go:216] guest clock: 1714354242.563104735
	I0428 18:30:42.568584    5100 fix.go:229] Guest: 2024-04-28 18:30:42.563104735 -0700 PDT Remote: 2024-04-28 18:30:38.018511 -0700 PDT m=+91.312813201 (delta=4.544593735s)
	I0428 18:30:42.568783    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:44.528614    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:44.528614    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:44.529235    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:46.913452    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:46.913716    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:46.920153    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:46.920882    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:46.921041    5100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714354242
	I0428 18:30:47.066116    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:30:42 UTC 2024
	
	I0428 18:30:47.066116    5100 fix.go:236] clock set: Mon Apr 29 01:30:42 UTC 2024
	 (err=<nil>)
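The clock fixup reads the guest's `date +%s.%N`, compares it against the host-side timestamp, and resets the guest clock once the drift exceeds tolerance; here the delta was about 4.54s. A sketch of the delta computation (the parser name is hypothetical; the constants are taken from the log lines above):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "date +%s.%N" output captured over SSH into a
// time.Time. Assumes the fractional part, when present, is nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1714354242.563104735")
	remote := time.Date(2024, 4, 28, 18, 30, 38, 18511000, time.FixedZone("PDT", -7*3600))
	fmt.Println("delta:", guest.Sub(remote)) // ~4.54s, past tolerance, so the clock is reset
}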
	I0428 18:30:47.066675    5100 start.go:83] releasing machines lock for "multinode-788600", held for 1m33.788514s
	I0428 18:30:47.066769    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:49.059891    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:49.060388    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:49.060388    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:51.541826    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:51.541826    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:51.545987    5100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:30:51.546223    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:51.556964    5100 ssh_runner.go:195] Run: cat /version.json
	I0428 18:30:51.556964    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:53.622682    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:53.622789    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:53.622943    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:56.119241    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:56.119241    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:56.120395    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:56.153523    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:56.154538    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:56.154788    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:56.212733    5100 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0428 18:30:56.212822    5100 ssh_runner.go:235] Completed: cat /version.json: (4.6558463s)
	I0428 18:30:56.227331    5100 ssh_runner.go:195] Run: systemctl --version
	I0428 18:30:56.298961    5100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:30:56.299087    5100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7530013s)
	I0428 18:30:56.299087    5100 command_runner.go:130] > systemd 252 (252)
	I0428 18:30:56.299087    5100 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0428 18:30:56.311091    5100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:30:56.322712    5100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0428 18:30:56.323363    5100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:30:56.335996    5100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:30:56.368726    5100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:30:56.368854    5100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:30:56.368894    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:30:56.369158    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:30:56.408119    5100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:30:56.420239    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:30:56.450407    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:30:56.468615    5100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:30:56.483087    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:30:56.518413    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:30:56.551580    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:30:56.590655    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:30:56.627626    5100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:30:56.668610    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:30:56.707360    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:30:56.741109    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:30:56.772199    5100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:30:56.789910    5100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:30:56.802591    5100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:30:56.831586    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:30:57.029306    5100 ssh_runner.go:195] Run: sudo systemctl restart containerd
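The sed pipeline above normalizes containerd's config.toml for the cgroupfs driver: sandbox image, OOM score handling, SystemdCgroup=false, the runc v2 shim, the CNI conf dir, and unprivileged ports. The SystemdCgroup rewrite, expressed as a Go regexp instead of sed (an illustrative equivalent, not the shipped code):

package main

import (
	"fmt"
	"regexp"
)

// Rewrites any "SystemdCgroup = ..." line to false while preserving its
// indentation, just like the sed -r invocation in the log.
var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func useCgroupfs(configTOML string) string {
	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	fmt.Print(useCgroupfs(in))
}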
	I0428 18:30:57.065129    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:30:57.081225    5100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:30:57.104967    5100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:30:57.104967    5100 command_runner.go:130] > [Unit]
	I0428 18:30:57.104967    5100 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:30:57.104967    5100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:30:57.105037    5100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:30:57.105037    5100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:30:57.105037    5100 command_runner.go:130] > StartLimitBurst=3
	I0428 18:30:57.105073    5100 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:30:57.105073    5100 command_runner.go:130] > [Service]
	I0428 18:30:57.105117    5100 command_runner.go:130] > Type=notify
	I0428 18:30:57.105117    5100 command_runner.go:130] > Restart=on-failure
	I0428 18:30:57.105117    5100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:30:57.105156    5100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:30:57.105156    5100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:30:57.105210    5100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:30:57.105210    5100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:30:57.105250    5100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:30:57.105250    5100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:30:57.105301    5100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:30:57.105357    5100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecStart=
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:30:57.105357    5100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitCORE=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:30:57.105357    5100 command_runner.go:130] > TasksMax=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:30:57.105357    5100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:30:57.105357    5100 command_runner.go:130] > Delegate=yes
	I0428 18:30:57.105357    5100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:30:57.105357    5100 command_runner.go:130] > KillMode=process
	I0428 18:30:57.105357    5100 command_runner.go:130] > [Install]
	I0428 18:30:57.105357    5100 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:30:57.118659    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:30:57.153965    5100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:30:57.204253    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:30:57.240015    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:30:57.277276    5100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:30:57.345718    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:30:57.371346    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:30:57.409737    5100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
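crictl resolves its runtime through /etc/crictl.yaml, which this run writes twice: first pointing at containerd, then at cri-dockerd.sock once Docker wins the runtime detection. Writing that one-key file is trivial; sketched here for completeness (the helper name is made up):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig emits the single runtime-endpoint key that crictl reads,
// matching the printf | sudo tee sequence in the log.
func writeCrictlConfig(path, endpoint string) error {
	return os.WriteFile(path, []byte("runtime-endpoint: "+endpoint+"\n"), 0644)
}

func main() {
	if err := writeCrictlConfig("crictl.yaml", "unix:///var/run/cri-dockerd.sock"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}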
	I0428 18:30:57.423205    5100 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:30:57.430233    5100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:30:57.441325    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:30:57.458054    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:30:57.502947    5100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:30:57.700154    5100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:30:57.882896    5100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:30:57.883180    5100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 18:30:57.927721    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:30:58.124953    5100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:31:00.770105    5100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6450046s)
	I0428 18:31:00.781386    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 18:31:00.815860    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:31:00.858671    5100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 18:31:01.050250    5100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 18:31:01.245194    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:01.445475    5100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 18:31:01.496426    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:31:01.534763    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:01.718829    5100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 18:31:01.836605    5100 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 18:31:01.857291    5100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 18:31:01.874846    5100 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0428 18:31:01.874846    5100 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0428 18:31:01.874846    5100 command_runner.go:130] > Device: 0,22	Inode: 858         Links: 1
	I0428 18:31:01.874846    5100 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0428 18:31:01.874846    5100 command_runner.go:130] > Access: 2024-04-29 01:31:01.743369559 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] > Modify: 2024-04-29 01:31:01.743369559 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] > Change: 2024-04-29 01:31:01.748369612 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] >  Birth: -
	I0428 18:31:01.874846    5100 start.go:562] Will wait 60s for crictl version
	I0428 18:31:01.887754    5100 ssh_runner.go:195] Run: which crictl
	I0428 18:31:01.894982    5100 command_runner.go:130] > /usr/bin/crictl
	I0428 18:31:01.907488    5100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 18:31:01.975356    5100 command_runner.go:130] > Version:  0.1.0
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeName:  docker
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeApiVersion:  v1
	I0428 18:31:01.975356    5100 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 18:31:01.984920    5100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:31:02.021960    5100 command_runner.go:130] > 26.0.2
	I0428 18:31:02.031724    5100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:31:02.062921    5100 command_runner.go:130] > 26.0.2
	I0428 18:31:02.067738    5100 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 18:31:02.067738    5100 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 18:31:02.069125    5100 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 18:31:02.073587    5100 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 18:31:02.073587    5100 ip.go:210] interface addr: 172.27.224.1/20
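Roughly what the ip.go lines above do: walk the host NICs, skip any whose name does not start with the wanted Hyper-V switch prefix, and take the addresses of the first match. A minimal Go sketch of that search (illustrative, not minikube's ip.go):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			fmt.Printf("%q does not match prefix %q\n", ifc.Name, prefix)
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			fmt.Println("interface addr:", a)
		}
		break
	}
}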
	I0428 18:31:02.090160    5100 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 18:31:02.096353    5100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:31:02.117037    5100 kubeadm.go:877] updating cluster {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 18:31:02.117328    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:31:02.126708    5100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:31:02.150678    5100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:31:02.151177    5100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:31:02.151177    5100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0428 18:31:02.151177    5100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0428 18:31:02.151177    5100 docker.go:615] Images already preloaded, skipping extraction
	I0428 18:31:02.161895    5100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:31:02.183468    5100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:31:02.183468    5100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:31:02.183468    5100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0428 18:31:02.183468    5100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0428 18:31:02.183468    5100 cache_images.go:84] Images are preloaded, skipping loading
	I0428 18:31:02.183468    5100 kubeadm.go:928] updating node { 172.27.239.170 8443 v1.30.0 docker true true} ...
	I0428 18:31:02.183468    5100 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-788600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.239.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 18:31:02.192446    5100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 18:31:02.227627    5100 command_runner.go:130] > cgroupfs
	I0428 18:31:02.227627    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:31:02.227627    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:31:02.227627    5100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 18:31:02.227627    5100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.239.170 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-788600 NodeName:multinode-788600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.239.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.239.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 18:31:02.228352    5100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.239.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-788600"
	  kubeletExtraArgs:
	    node-ip: 172.27.239.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
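The generated file above is four YAML documents in one stream: InitConfiguration and ClusterConfiguration for kubeadm, a KubeletConfiguration, and a KubeProxyConfiguration, each dispatched by its `kind`. A small Go sketch of splitting such a stream and listing the kinds (the helper is illustrative):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kinds splits a multi-document YAML stream on "---" separators and reports
// the "kind:" of each document, mirroring how the config above is organized.
func kinds(stream string) []string {
	var out []string
	kindRe := regexp.MustCompile(`(?m)^kind: (\S+)`)
	for _, doc := range strings.Split(stream, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	fmt.Println(kinds("kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"))
}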
	
	I0428 18:31:02.243724    5100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubeadm
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubectl
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubelet
	I0428 18:31:02.263782    5100 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 18:31:02.277865    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 18:31:02.295334    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0428 18:31:02.327593    5100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 18:31:02.355898    5100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0428 18:31:02.400601    5100 ssh_runner.go:195] Run: grep 172.27.239.170	control-plane.minikube.internal$ /etc/hosts
	I0428 18:31:02.407693    5100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:31:02.442067    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:02.626741    5100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:31:02.665784    5100 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600 for IP: 172.27.239.170
	I0428 18:31:02.665784    5100 certs.go:194] generating shared ca certs ...
	I0428 18:31:02.665784    5100 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:02.666397    5100 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 18:31:02.667047    5100 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 18:31:02.667047    5100 certs.go:256] generating profile certs ...
	I0428 18:31:02.667730    5100 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.key
	I0428 18:31:02.668417    5100 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66
	I0428 18:31:02.668505    5100 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.239.170]
	I0428 18:31:03.091055    5100 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 ...
	I0428 18:31:03.091055    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66: {Name:mkaf1a9c903a6c9cf9004a34772c2d8b3ee15342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:03.093044    5100 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66 ...
	I0428 18:31:03.093044    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66: {Name:mk024a6f259c1625f6490ba1e52b63b460f3073d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:03.094536    5100 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt
	I0428 18:31:03.107123    5100 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key
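Note the two-step dance: the apiserver cert is generated under a suffixed name (apiserver.crt.bf279c66) and only then copied to its canonical path. The suffix plausibly keys the cert to its SAN set, so a changed node IP yields a different name and forces regeneration instead of reusing a stale cert; that reading is an inference from the log, and the sketch below is a hypothetical reconstruction, not minikube's actual scheme:

package main

import (
	"crypto/sha1"
	"fmt"
	"sort"
	"strings"
)

// sanSuffix derives a short, stable suffix from a SAN set (hypothetical).
func sanSuffix(sans []string) string {
	sort.Strings(sans)
	sum := sha1.Sum([]byte(strings.Join(sans, ",")))
	return fmt.Sprintf("%x", sum[:4])
}

func main() {
	fmt.Println(sanSuffix([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "172.27.239.170"}))
}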
	I0428 18:31:03.109129    5100 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 18:31:03.110129    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 18:31:03.111127    5100 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 18:31:03.112121    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 18:31:03.112121    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.113143    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 18:31:03.164538    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 18:31:03.213913    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 18:31:03.259463    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 18:31:03.307159    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 18:31:03.356708    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0428 18:31:03.409218    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 18:31:03.461775    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 18:31:03.502141    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 18:31:03.549108    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 18:31:03.597203    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 18:31:03.642354    5100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 18:31:03.686876    5100 ssh_runner.go:195] Run: openssl version
	I0428 18:31:03.696135    5100 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0428 18:31:03.708139    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 18:31:03.745183    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.753163    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.753526    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.765193    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.774235    5100 command_runner.go:130] > 51391683
	I0428 18:31:03.786397    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
	I0428 18:31:03.814386    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 18:31:03.850195    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.857810    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.857810    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.870129    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.878498    5100 command_runner.go:130] > 3ec20f2e
	I0428 18:31:03.890751    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 18:31:03.922266    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 18:31:03.952546    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.960640    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.960640    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.973542    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.982547    5100 command_runner.go:130] > b5213941
	I0428 18:31:03.992543    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
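The three openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: directories like /etc/ssl/certs resolve a CA by the hash of its subject name, so each PEM file needs a <hash>.0 symlink pointing at it. A minimal Go sketch of that one step, assuming an openssl binary on PATH (the paths are illustrative, taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log's "openssl x509 -hash" + "ln -fs" pair:
// compute the subject hash of certPath and symlink it as <certsDir>/<hash>.0
// so OpenSSL-based clients can discover the CA.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}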
	I0428 18:31:04.020878    5100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:31:04.027800    5100 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:31:04.027800    5100 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0428 18:31:04.027800    5100 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0428 18:31:04.027800    5100 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:31:04.027800    5100 command_runner.go:130] > Access: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] > Modify: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] > Change: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] >  Birth: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.039221    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0428 18:31:04.049656    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.061648    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0428 18:31:04.075450    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.089519    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0428 18:31:04.099116    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.110882    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0428 18:31:04.120974    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.133464    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0428 18:31:04.146142    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.158268    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0428 18:31:04.167665    5100 command_runner.go:130] > Certificate will not expire
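Each -checkend 86400 probe above asks whether a certificate will still be valid 24 hours from now; openssl prints "Certificate will expire" and exits non-zero if not. A rough Go equivalent with crypto/x509, assuming a PEM file on disk (the path below is one of those from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching the semantics of "openssl x509 -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}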
	I0428 18:31:04.168193    5100 kubeadm.go:391] StartCluster: {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:31:04.178224    5100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:31:04.213190    5100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/minikube/etcd:
	I0428 18:31:04.233991    5100 command_runner.go:130] > member
	W0428 18:31:04.233991    5100 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0428 18:31:04.233991    5100 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0428 18:31:04.233991    5100 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0428 18:31:04.244993    5100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0428 18:31:04.263105    5100 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0428 18:31:04.263871    5100 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-788600" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:04.264562    5100 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-788600" cluster setting kubeconfig missing "multinode-788600" context setting]
	I0428 18:31:04.265326    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:04.279100    5100 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:04.279824    5100 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.239.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 18:31:04.281162    5100 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 18:31:04.294422    5100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0428 18:31:04.312988    5100 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0428 18:31:04.312988    5100 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0428 18:31:04.312988    5100 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0428 18:31:04.312988    5100 command_runner.go:130] >  kind: InitConfiguration
	I0428 18:31:04.312988    5100 command_runner.go:130] >  localAPIEndpoint:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -  advertiseAddress: 172.27.231.169
	I0428 18:31:04.312988    5100 command_runner.go:130] > +  advertiseAddress: 172.27.239.170
	I0428 18:31:04.312988    5100 command_runner.go:130] >    bindPort: 8443
	I0428 18:31:04.312988    5100 command_runner.go:130] >  bootstrapTokens:
	I0428 18:31:04.312988    5100 command_runner.go:130] >    - groups:
	I0428 18:31:04.312988    5100 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0428 18:31:04.312988    5100 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0428 18:31:04.312988    5100 command_runner.go:130] >    name: "multinode-788600"
	I0428 18:31:04.312988    5100 command_runner.go:130] >    kubeletExtraArgs:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -    node-ip: 172.27.231.169
	I0428 18:31:04.312988    5100 command_runner.go:130] > +    node-ip: 172.27.239.170
	I0428 18:31:04.312988    5100 command_runner.go:130] >    taints: []
	I0428 18:31:04.312988    5100 command_runner.go:130] >  ---
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0428 18:31:04.312988    5100 command_runner.go:130] >  kind: ClusterConfiguration
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiServer:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	I0428 18:31:04.312988    5100 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	I0428 18:31:04.312988    5100 command_runner.go:130] >    extraArgs:
	I0428 18:31:04.312988    5100 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0428 18:31:04.313995    5100 command_runner.go:130] >  controllerManager:
	I0428 18:31:04.313995    5100 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.231.169
	+  advertiseAddress: 172.27.239.170
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-788600"
	   kubeletExtraArgs:
	-    node-ip: 172.27.231.169
	+    node-ip: 172.27.239.170
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0428 18:31:04.313995    5100 kubeadm.go:1154] stopping kube-system containers ...
	I0428 18:31:04.322985    5100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:31:04.353225    5100 command_runner.go:130] > 64e6fcf4a3f2
	I0428 18:31:04.353225    5100 command_runner.go:130] > 16ea9b9acd26
	I0428 18:31:04.353225    5100 command_runner.go:130] > 20d6a18478fc
	I0428 18:31:04.353225    5100 command_runner.go:130] > 70af634f6134
	I0428 18:31:04.353225    5100 command_runner.go:130] > 33e59494d8be
	I0428 18:31:04.353225    5100 command_runner.go:130] > 8542b2c39cf5
	I0428 18:31:04.353225    5100 command_runner.go:130] > 776d075f3716
	I0428 18:31:04.353225    5100 command_runner.go:130] > d1342e9d7111
	I0428 18:31:04.353225    5100 command_runner.go:130] > d55fefd692cf
	I0428 18:31:04.353225    5100 command_runner.go:130] > e148c0cdbae0
	I0428 18:31:04.353225    5100 command_runner.go:130] > edb2c636ad5d
	I0428 18:31:04.353225    5100 command_runner.go:130] > 27388b03fb26
	I0428 18:31:04.353225    5100 command_runner.go:130] > 038a267a1caf
	I0428 18:31:04.353225    5100 command_runner.go:130] > 9ffe1b8b41e4
	I0428 18:31:04.353225    5100 command_runner.go:130] > 8328e1b41d78
	I0428 18:31:04.353225    5100 command_runner.go:130] > 26381d4606b5
	I0428 18:31:04.354491    5100 docker.go:483] Stopping containers: [64e6fcf4a3f2 16ea9b9acd26 20d6a18478fc 70af634f6134 33e59494d8be 8542b2c39cf5 776d075f3716 d1342e9d7111 d55fefd692cf e148c0cdbae0 edb2c636ad5d 27388b03fb26 038a267a1caf 9ffe1b8b41e4 8328e1b41d78 26381d4606b5]
	I0428 18:31:04.364390    5100 ssh_runner.go:195] Run: docker stop 64e6fcf4a3f2 16ea9b9acd26 20d6a18478fc 70af634f6134 33e59494d8be 8542b2c39cf5 776d075f3716 d1342e9d7111 d55fefd692cf e148c0cdbae0 edb2c636ad5d 27388b03fb26 038a267a1caf 9ffe1b8b41e4 8328e1b41d78 26381d4606b5
	I0428 18:31:04.397389    5100 command_runner.go:130] > 64e6fcf4a3f2
	I0428 18:31:04.397389    5100 command_runner.go:130] > 16ea9b9acd26
	I0428 18:31:04.397539    5100 command_runner.go:130] > 20d6a18478fc
	I0428 18:31:04.397539    5100 command_runner.go:130] > 70af634f6134
	I0428 18:31:04.397539    5100 command_runner.go:130] > 33e59494d8be
	I0428 18:31:04.397539    5100 command_runner.go:130] > 8542b2c39cf5
	I0428 18:31:04.397539    5100 command_runner.go:130] > 776d075f3716
	I0428 18:31:04.397539    5100 command_runner.go:130] > d1342e9d7111
	I0428 18:31:04.397539    5100 command_runner.go:130] > d55fefd692cf
	I0428 18:31:04.397619    5100 command_runner.go:130] > e148c0cdbae0
	I0428 18:31:04.397619    5100 command_runner.go:130] > edb2c636ad5d
	I0428 18:31:04.397619    5100 command_runner.go:130] > 27388b03fb26
	I0428 18:31:04.397619    5100 command_runner.go:130] > 038a267a1caf
	I0428 18:31:04.397619    5100 command_runner.go:130] > 9ffe1b8b41e4
	I0428 18:31:04.397619    5100 command_runner.go:130] > 8328e1b41d78
	I0428 18:31:04.397619    5100 command_runner.go:130] > 26381d4606b5
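The stop above is a two-step docker pipeline: list the IDs of containers whose names match the k8s_.*_(kube-system)_ pattern, then stop the whole batch with a single docker stop. A compact sketch with os/exec, assuming docker is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: collect IDs of kube-system pod containers, as in the log.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("nothing to stop")
		return
	}
	// Step 2: one "docker stop id1 id2 ..." for the whole batch.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}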
	I0428 18:31:04.410385    5100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0428 18:31:04.456046    5100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 18:31:04.472006    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0428 18:31:04.472006    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0428 18:31:04.472993    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0428 18:31:04.472993    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:31:04.472993    5100 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:31:04.472993    5100 kubeadm.go:156] found existing configuration files:
	
	I0428 18:31:04.484113    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 18:31:04.499059    5100 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:31:04.499059    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:31:04.510719    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 18:31:04.543169    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 18:31:04.557731    5100 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:31:04.558863    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:31:04.571495    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 18:31:04.601871    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 18:31:04.617538    5100 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:31:04.617538    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:31:04.633328    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 18:31:04.666719    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 18:31:04.682759    5100 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:31:04.682759    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:31:04.694102    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
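The four grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (or, as here, does not exist at all) is removed so the kubeadm phases below can regenerate it. A local-filesystem sketch of the same loop (the real code runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, name := range confs {
		path := "/etc/kubernetes/" + name
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove it so kubeadm rewrites it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(path)
			fmt.Printf("removed stale %s\n", path)
		}
	}
}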
	I0428 18:31:04.724740    5100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 18:31:04.743715    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:05.046800    5100 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using the existing "sa" key
	I0428 18:31:05.047042    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 18:31:05.789220    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.089406    5100 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 18:31:06.089521    5100 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 18:31:06.089521    5100 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0428 18:31:06.089521    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 18:31:06.200973    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.335221    5100 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 18:31:06.335297    5100 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:31:06.352189    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:06.860779    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:07.355397    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:07.859488    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:08.350929    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:08.376581    5100 command_runner.go:130] > 1873
	I0428 18:31:08.377248    5100 api_server.go:72] duration metric: took 2.0419465s to wait for apiserver process to appear ...
	I0428 18:31:08.377378    5100 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:31:08.377378    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.562154    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0428 18:31:11.562345    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0428 18:31:11.562345    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.666889    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:11.667094    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:11.892596    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.900932    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:11.900932    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:12.378092    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:12.393638    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:12.393764    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:12.886799    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:12.898497    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:12.898581    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:13.392663    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:13.399821    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
	I0428 18:31:13.400894    5100 round_trippers.go:463] GET https://172.27.239.170:8443/version
	I0428 18:31:13.400978    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:13.400978    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:13.400978    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:13.412818    5100 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 18:31:13.412818    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:13 GMT
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Audit-Id: b0a79bb7-8b25-46f1-b283-4f71e13e3f94
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:13.412818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:13.412818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Content-Length: 263
	I0428 18:31:13.412818    5100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:31:13.412818    5100 api_server.go:141] control plane version: v1.30.0
	I0428 18:31:13.412818    5100 api_server.go:131] duration metric: took 5.0354284s to wait for apiserver health ...
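The wait above is a plain readiness poll: GET /healthz roughly every 500ms, tolerating 403 (the anonymous user is refused while RBAC bootstrap roles are still being created) and 500 (poststarthooks still failing) until the endpoint answers 200 "ok". A bare-bones sketch of that loop; InsecureSkipVerify here is only a stand-in for the profile client certificates the real check loads:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		// Illustrative only: the real check authenticates with the
		// cluster's client cert rather than skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence visible in the log
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://172.27.239.170:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}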
	I0428 18:31:13.412818    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:31:13.412818    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:31:13.417869    5100 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 18:31:13.436044    5100 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 18:31:13.445362    5100 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0428 18:31:13.445362    5100 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0428 18:31:13.445362    5100 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0428 18:31:13.445505    5100 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:31:13.445505    5100 command_runner.go:130] > Access: 2024-04-29 01:29:43.865545900 +0000
	I0428 18:31:13.445555    5100 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0428 18:31:13.445631    5100 command_runner.go:130] > Change: 2024-04-28 18:29:34.726000000 +0000
	I0428 18:31:13.445631    5100 command_runner.go:130] >  Birth: -
	I0428 18:31:13.445951    5100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 18:31:13.445951    5100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 18:31:13.547488    5100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 18:31:14.632537    5100 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0428 18:31:14.632691    5100 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0428 18:31:14.632691    5100 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0428 18:31:14.632718    5100 command_runner.go:130] > daemonset.apps/kindnet configured
	I0428 18:31:14.632809    5100 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0852276s)
	I0428 18:31:14.632965    5100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:31:14.633166    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:14.633166    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:14.633166    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:14.633166    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:14.639871    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:14.639871    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:14.640274    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:14.640274    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:14 GMT
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Audit-Id: 248bcd12-c9b2-4c03-974b-33681c1e3b65
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:14.642794    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1806"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87778 chars]
	I0428 18:31:14.649754    5100 system_pods.go:59] 12 kube-system pods found
	I0428 18:31:14.650290    5100 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0428 18:31:14.650290    5100 system_pods.go:61] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:14.650290    5100 system_pods.go:61] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0428 18:31:14.650462    5100 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0428 18:31:14.650646    5100 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0428 18:31:14.650646    5100 system_pods.go:74] duration metric: took 17.6807ms to wait for pod list to return data ...
	I0428 18:31:14.650646    5100 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:31:14.650646    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:14.650646    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:14.650646    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:14.650646    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:14.657389    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:14.657389    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Audit-Id: 537b24cc-1bc6-426b-ba20-af82c6e285ac
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:14.657389    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:14.657389    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:14 GMT
	I0428 18:31:14.657389    5100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1806"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:105] duration metric: took 8.7579ms to run NodePressure ...
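
The NodePressure pass reads capacity and pressure conditions off the NodeList fetched above (three nodes, hence the repeated capacity lines). A rough equivalent, again only a sketch of the same idea:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure prints each node's capacities (as logged above)
    // and fails if any node reports a resource-pressure condition.
    func verifyNodePressure(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                    }
                }
            }
        }
        return nil
    }
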
	I0428 18:31:14.659404    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:15.095181    5100 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0428 18:31:15.095181    5100 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
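
The addon phase is re-run inside the guest over SSH, which is what produces the two "[addons] Applied essential addon" lines. A stand-alone approximation with os/exec (host address and key path are placeholders; minikube itself goes through its ssh_runner abstraction rather than shelling out like this):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Placeholder host and key; minikube resolves these per profile.
        cmd := exec.Command("ssh",
            "-i", "/path/to/machine/id_rsa", "docker@172.27.239.170",
            `sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" `+
                `kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // expect "[addons] Applied essential addon: CoreDNS" etc.
        if err != nil {
            panic(err)
        }
    }
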
	I0428 18:31:15.096193    5100 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0428 18:31:15.096193    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0428 18:31:15.096193    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.096193    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.096193    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.136172    5100 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0428 18:31:15.136172    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Audit-Id: 65742097-3ca7-436d-bc20-f699a73df0d7
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.136172    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.136172    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.138207    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1812"},"items":[{"metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1757","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0428 18:31:15.139771    5100 kubeadm.go:733] kubelet initialised
	I0428 18:31:15.139771    5100 kubeadm.go:734] duration metric: took 43.5779ms waiting for restarted kubelet to initialise ...
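
The "restarted kubelet" probe is just a pod list filtered on the tier=control-plane label: once the kubelet has mirrored its static pods back to the API server, it counts as initialised. A sketch of that check:

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // kubeletInitialised reports whether any control-plane static pods
    // are visible through the API server yet.
    func kubeletInitialised(cs *kubernetes.Clientset) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "tier=control-plane"})
        if err != nil {
            return false, err
        }
        fmt.Printf("%d control-plane pods visible\n", len(pods.Items))
        return len(pods.Items) > 0, nil
    }
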
	I0428 18:31:15.139771    5100 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:15.139771    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:15.139771    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.139771    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.139771    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.145356    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:15.145950    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Audit-Id: 459a1c96-348d-496d-84c8-66eff19f8b17
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.145950    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.145950    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.146022    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.147048    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1812"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0428 18:31:15.149647    5100 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.150653    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:15.150653    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.150653    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.150653    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.153647    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.153647    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.154132    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.154132    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Audit-Id: 00fb04df-3abb-4699-8d39-aaed3f0c4562
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.154369    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:15.154928    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.155000    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.155000    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.155000    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.157642    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.157847    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.157847    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.157847    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Audit-Id: fe9b308f-e86b-4f3b-bb28-83392d7f2e48
	I0428 18:31:15.158186    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.158691    5100 pod_ready.go:97] node "multinode-788600" hosting pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.158973    5100 pod_ready.go:81] duration metric: took 9.3258ms for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.158973    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
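
The cycle above (GET the pod, GET its node, then skip the wait because the node itself is not Ready) now repeats for etcd, kube-apiserver, kube-controller-manager, the kube-proxies, and kube-scheduler. The gating logic, reduced to a sketch (not minikube's pod_ready.go, but the same shape):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReadyOrSkip: a pod can only become Ready if its node is Ready,
    // so a not-Ready node short-circuits the 4m per-pod wait.
    func podReadyOrSkip(cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                return false, fmt.Errorf("node %q hosting pod %q is currently not Ready (skipping)", node.Name, name)
            }
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
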
	I0428 18:31:15.158973    5100 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.159057    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:31:15.159127    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.159127    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.159127    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.171183    5100 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0428 18:31:15.171183    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.171183    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.171183    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Audit-Id: 9e8d3a67-7fc6-44da-a4ab-4c3bf297d313
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.171183    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1757","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0428 18:31:15.171183    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.172154    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.172154    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.172154    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.174165    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.174603    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Audit-Id: 58fceb9c-2f26-4fda-8c21-03ed3aef01a5
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.174603    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.174603    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.175234    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.175376    5100 pod_ready.go:97] node "multinode-788600" hosting pod "etcd-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.175376    5100 pod_ready.go:81] duration metric: took 16.403ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.175376    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "etcd-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.175376    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.175376    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:15.175376    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.175376    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.175376    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.177956    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.178891    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.178891    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.178891    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Audit-Id: cc23e9ad-96dd-439b-a430-a3c689751251
	I0428 18:31:15.179004    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.179113    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"5ade8d95-5387-4444-95af-604116cf695e","resourceVersion":"1754","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.239.170:8443","kubernetes.io/config.hash":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.mirror":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.seen":"2024-04-29T01:31:06.268742128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0428 18:31:15.179786    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.179786    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.179877    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.179877    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.182704    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.182896    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Audit-Id: c3ba53e4-8df9-4d4e-bda5-185d6c10f77f
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.182896    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.182896    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.182896    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.183632    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-apiserver-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.183632    5100 pod_ready.go:81] duration metric: took 8.2563ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.183632    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-apiserver-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.183632    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.183820    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:15.183820    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.183820    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.183820    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.186501    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.186501    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Audit-Id: 99893935-fb21-420c-9cff-c20de7ccb907
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.186939    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.186939    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.187313    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:15.188091    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.188091    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.188091    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.188091    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.190500    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.190500    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.190500    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.190500    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Audit-Id: 7f56dd45-7d68-462a-a53e-5a85e89ccc57
	I0428 18:31:15.190500    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.191494    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-controller-manager-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.191494    5100 pod_ready.go:81] duration metric: took 7.7784ms for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.191494    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-controller-manager-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.191494    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.306676    5100 request.go:629] Waited for 114.7847ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:15.306676    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:15.306676    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.306676    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.306676    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.310457    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.311284    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Audit-Id: 103130c2-ca49-4b4a-92e6-5d0ccc0d6407
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.311284    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.311284    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.311284    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"1811","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0428 18:31:15.508336    5100 request.go:629] Waited for 195.9795ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.508605    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.508605    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.508651    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.508667    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.512169    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.512169    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Audit-Id: 6961d0a4-358e-4e41-aa67-2f2730d6f3ff
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.512464    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.512464    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.512464    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.512718    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.513623    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-proxy-bkkql" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.513623    5100 pod_ready.go:81] duration metric: took 322.1279ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.513623    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-proxy-bkkql" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
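
The "Waited for ... due to client-side throttling" messages are emitted by client-go's own rate limiter, not by server-side priority and fairness (the log text says as much). With an unconfigured rest.Config the defaults are roughly QPS 5 and burst 10, so a tight GET loop like this wait quickly queues. Raising the limits, were that ever desired, looks like this (the kubeconfig path is a placeholder):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        // Unset values default to QPS=5, Burst=10; bursts of GETs like the
        // wait loop above then sit in the client-side limiter, producing
        // the "Waited for ..." log lines.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
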
	I0428 18:31:15.513623    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.696409    5100 request.go:629] Waited for 182.6614ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:15.696609    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:15.696609    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.696609    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.696609    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.700367    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.700367    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.701342    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.701342    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Audit-Id: 20e27f84-22b7-47b4-a097-76936ffa5a07
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.701658    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:31:15.900703    5100 request.go:629] Waited for 198.0923ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:15.900822    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:15.900822    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.900822    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.900822    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.909119    5100 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 18:31:15.909119    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.909119    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.909119    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Audit-Id: d0c1002e-a1b6-497f-892e-ddd3c4c172ec
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.909119    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"1353","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0428 18:31:15.910040    5100 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:15.910040    5100 pod_ready.go:81] duration metric: took 396.4162ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.910040    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:16.102000    5100 request.go:629] Waited for 191.7654ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:16.102255    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:16.102255    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.102255    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.102255    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.105969    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.107006    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Audit-Id: 855ecca8-d4e6-430b-aa3c-4558037042ca
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.107038    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.107038    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.107379    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjsfc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f06aadb7-e646-4105-af2f-0acc4a8ad174","resourceVersion":"1698","creationTimestamp":"2024-04-29T01:16:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:16:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0428 18:31:16.306385    5100 request.go:629] Waited for 198.1483ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:16.306425    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:16.306425    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.306425    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.306425    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.310172    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.311023    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.311023    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.311023    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.311096    5100 round_trippers.go:580]     Audit-Id: 0f268a7f-8c37-4653-86df-96846cc991d3
	I0428 18:31:16.311337    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m03","uid":"d68977ad-af85-4957-85dc-4ad584113d26","resourceVersion":"1709","creationTimestamp":"2024-04-29T01:26:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_26_47_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:26:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0428 18:31:16.311937    5100 pod_ready.go:97] node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:16.311937    5100 pod_ready.go:81] duration metric: took 401.8965ms for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:16.311937    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
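
Note the different failure here: multinode-788600-m03 reports Ready:"Unknown" rather than "False", meaning the node controller has stopped hearing from that kubelet entirely (consistent with a stopped VM), not that the kubelet reported itself unready. Both values fail the readiness gate above; a small helper that tells the three states apart:

    package sketch

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // describeReady distinguishes the NodeReady states seen in this log.
    func describeReady(node *corev1.Node) string {
        for _, c := range node.Status.Conditions {
            if c.Type != corev1.NodeReady {
                continue
            }
            switch c.Status {
            case corev1.ConditionTrue:
                return "kubelet is posting ready status"
            case corev1.ConditionFalse:
                return "kubelet reports the node not ready"
            default: // corev1.ConditionUnknown
                return "node controller lost contact with the kubelet"
            }
        }
        return fmt.Sprintf("node %s has no Ready condition", node.Name)
    }
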
	I0428 18:31:16.311937    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:16.509495    5100 request.go:629] Waited for 197.318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:16.509644    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:16.509724    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.509724    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.509762    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.512765    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.513186    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.513186    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.513186    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Audit-Id: 43c41b94-99b3-45b3-823c-f7e75c2eefbe
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.513458    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"1769","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0428 18:31:16.700515    5100 request.go:629] Waited for 186.1649ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:16.700515    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:16.700515    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.700515    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.700515    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.704023    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.705037    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Audit-Id: ee4d4b15-72df-4e5c-86f4-5490ccc9a289
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.705077    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.705077    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.705222    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:16.705767    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-scheduler-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:16.705924    5100 pod_ready.go:81] duration metric: took 393.9853ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:16.705924    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-scheduler-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:16.705924    5100 pod_ready.go:38] duration metric: took 1.566149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:16.705924    5100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 18:31:16.724721    5100 command_runner.go:130] > -16
	I0428 18:31:16.725018    5100 ops.go:34] apiserver oom_adj: -16
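
The -16 read back from /proc/<pid>/oom_adj confirms the API server is shielded from the kernel OOM killer (negative values make a process a less likely victim; oom_adj is the legacy interface, which newer kernels map onto oom_score_adj). A local equivalent of the check, assuming a single kube-apiserver process:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep for the apiserver, then read its legacy oom_adj value,
        // matching `cat /proc/$(pgrep kube-apiserver)/oom_adj` above.
        // Assumes pgrep returns exactly one PID.
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }
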
	I0428 18:31:16.725018    5100 kubeadm.go:591] duration metric: took 12.4909983s to restartPrimaryControlPlane
	I0428 18:31:16.725018    5100 kubeadm.go:393] duration metric: took 12.5567953s to StartCluster
	I0428 18:31:16.725018    5100 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:16.725018    5100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:16.726568    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
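
The kubeconfig update runs under a named write lock (the {... Delay:500ms Timeout:1m0s ...} spec above) so parallel test binaries sharing C:\Users\jenkins.minikube1\minikube-integration\kubeconfig cannot clobber each other. A simplified cross-process version using an O_EXCL lockfile; minikube's actual locking implementation differs:

    package sketch

    import (
        "errors"
        "os"
        "time"
    )

    // writeFileLocked approximates the WriteFile-under-lock pattern:
    // retry an exclusive lockfile create every 500ms for up to a minute,
    // then write the file and drop the lock.
    func writeFileLocked(path string, data []byte) error {
        lock := path + ".lock"
        deadline := time.Now().Add(time.Minute)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                return os.WriteFile(path, data, 0o600)
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for " + lock)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
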
	I0428 18:31:16.727966    5100 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 18:31:16.727966    5100 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 18:31:16.732826    5100 out.go:177] * Verifying Kubernetes components...
	I0428 18:31:16.728603    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:31:16.737476    5100 out.go:177] * Enabled addons: 
	I0428 18:31:16.742152    5100 addons.go:505] duration metric: took 14.1858ms for enable addons: enabled=[]
	I0428 18:31:16.751296    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:17.008730    5100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:31:17.039776    5100 node_ready.go:35] waiting up to 6m0s for node "multinode-788600" to be "Ready" ...
	I0428 18:31:17.040103    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.040103    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.040146    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.040172    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.043764    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.043764    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Audit-Id: a8273f55-9742-4a3a-93b9-eca47c09292d
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.043764    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.043764    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.044784    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
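	[Editor's note: the GET / Request Headers / Response Status / Response Headers groups in this cycle, and in every cycle below, are client-go's verbose transport logging (round_trippers.go) at high log verbosity. The same shape can be reproduced with a custom http.RoundTripper; a minimal sketch with names of our own choosing:]

```go
package transportlog

import (
	"log"
	"net/http"
	"time"

	"k8s.io/client-go/rest"
)

// loggingRT prints each request and timed response, loosely imitating the
// round_trippers.go output above.
type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	log.Printf("%s %s", req.Method, req.URL)
	resp, err := l.next.RoundTrip(req)
	if err == nil {
		log.Printf("Response Status: %s in %d milliseconds",
			resp.Status, time.Since(start).Milliseconds())
	}
	return resp, err
}

// WithLogging wires the logger into a rest.Config so every API call is traced.
func WithLogging(cfg *rest.Config) *rest.Config {
	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		return loggingRT{next: rt}
	}
	return cfg
}
```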
	I0428 18:31:17.044784    5100 node_ready.go:49] node "multinode-788600" has status "Ready":"True"
	I0428 18:31:17.044784    5100 node_ready.go:38] duration metric: took 4.9181ms for node "multinode-788600" to be "Ready" ...
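	[Editor's note: the node_ready wait above boils down to polling GET /api/v1/nodes/<name> until the NodeReady condition reports True; here it returned in ~5ms because the node was already Ready. A client-go sketch of that loop under the same 6m budget; the function name is illustrative, not minikube's:]

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```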
	I0428 18:31:17.044784    5100 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:17.109075    5100 request.go:629] Waited for 64.0491ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:17.109310    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:17.109310    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.109310    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.109310    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.114919    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:17.115371    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.115427    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Audit-Id: 006c7d51-eccd-4506-a698-005b0daa1d0b
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.115427    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.116826    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1817"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0428 18:31:17.120579    5100 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:17.297742    5100 request.go:629] Waited for 177.1623ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.297742    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.297742    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.297742    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.297742    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.301521    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.301521    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.301521    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.301521    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Audit-Id: 8e66d5c6-ec9a-4aa3-9b06-d540afe60889
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.302710    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:17.499470    5100 request.go:629] Waited for 195.8663ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.499470    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.499470    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.499470    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.499470    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.503650    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.503650    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.503650    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.503650    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.503755    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Audit-Id: 81db7e77-99aa-4860-9e04-b6ee3d7ee5e6
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.504045    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:17.703045    5100 request.go:629] Waited for 78.0265ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.703158    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.703158    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.703158    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.703158    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.708829    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:17.709368    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Audit-Id: 5590ba60-674b-44c2-82f1-0b5501385170
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.709443    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.709443    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.709717    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:17.907071    5100 request.go:629] Waited for 196.8197ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.907260    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.907260    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.907260    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.907260    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.912062    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:17.912062    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Audit-Id: 7eb648bb-2c0e-4586-8efc-8ed163da53ce
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.912062    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.912062    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.912062    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:18.125074    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:18.125176    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.125176    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.125176    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.130106    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:18.130391    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Audit-Id: 16906fd6-6d66-4bc7-9365-56443fcce4da
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.130455    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.130455    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.130455    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:18.297115    5100 request.go:629] Waited for 165.6205ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.297115    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.297115    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.297115    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.297115    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.301050    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:18.301050    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.301050    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.301050    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.301771    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.301771    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.301771    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.301771    5100 round_trippers.go:580]     Audit-Id: 493da01f-28a4-469a-b479-0e5c634dcda6
	I0428 18:31:18.302106    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:18.623750    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:18.623750    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.623884    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.623884    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.627295    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:18.627295    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.627295    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Audit-Id: 0158c0f7-3b76-4cc8-88e6-20a75e3a14a6
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.628185    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.628287    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:18.701220    5100 request.go:629] Waited for 71.8291ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.701447    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.701447    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.701447    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.701447    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.705727    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:18.706655    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.706655    5100 round_trippers.go:580]     Audit-Id: 857798b2-ed0f-4456-ac6b-802e8e992d5a
	I0428 18:31:18.706655    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.706716    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.706716    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.706716    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.706716    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.707322    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:19.125144    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:19.125458    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.125458    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.125458    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.129851    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:19.129851    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.130436    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Audit-Id: e680eb2e-fdde-4f45-8785-96cc96451ae4
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.130436    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.130645    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:19.131464    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:19.131464    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.131464    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.131539    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.135413    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:19.135592    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.135592    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Audit-Id: 94b7ce89-e9f4-4224-84b3-b2a746aed8d9
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.135592    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.136057    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:19.136636    5100 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:19.625365    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:19.625365    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.625365    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.625365    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.629585    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:19.630350    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Audit-Id: 9a54d882-18e4-412a-95e9-2944c7341b61
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.630350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.630350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.631010    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:19.631732    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:19.631732    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.631732    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.631732    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.634764    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:19.635282    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.635282    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.635282    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Audit-Id: ce01b556-8310-4cd0-97b1-00048e3ce5ef
	I0428 18:31:19.635367    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.635644    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:20.125337    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:20.125563    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.125563    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.125563    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.130243    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:20.130243    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Audit-Id: d66e8822-e755-4521-8c73-cf13c831f445
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.130340    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.130340    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.130550    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:20.131365    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:20.131365    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.131365    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.131365    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.135405    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:20.135608    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Audit-Id: 45cd6471-74c5-4493-b702-d89fd8d35e5d
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.135608    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.135608    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.136101    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:20.634410    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:20.634488    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.634488    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.634557    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.637052    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:20.637426    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Audit-Id: ab177460-eb95-46f5-a35e-f25819254aeb
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.637541    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.637541    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.637794    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:20.638636    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:20.638636    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.638636    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.638695    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.641492    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:20.641556    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Audit-Id: ed017748-8a58-4062-9bb8-e81c00b3cba6
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.641624    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.641624    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.641624    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.641935    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.127928    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:21.127928    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.127928    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.127928    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.132962    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:21.133357    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.133357    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.133357    5100 round_trippers.go:580]     Audit-Id: 1230feb4-c38f-4839-9a95-4f3d25a63a95
	I0428 18:31:21.133430    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.133430    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.133430    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.133430    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.133643    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:21.134444    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.134444    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.134444    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.134444    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.140109    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:21.140391    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.140391    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.140492    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Audit-Id: 0db692a7-5837-417e-8d92-b8c244e93eee
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.140806    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.141367    5100 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:21.633646    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:21.633743    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.633743    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.633743    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.637104    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.638230    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Audit-Id: f68ff9c4-1dfd-405f-a796-cc57177a2633
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.638230    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.638230    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.638622    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0428 18:31:21.639344    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.639415    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.639415    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.639415    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.642703    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.642882    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Audit-Id: 156045d7-ea62-439c-a5a2-764198fcf8fc
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.642882    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.642882    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.643283    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.643782    5100 pod_ready.go:92] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.643853    5100 pod_ready.go:81] duration metric: took 4.5231918s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.643853    5100 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.644054    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:31:21.644110    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.644110    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.644110    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.646053    5100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:31:21.646894    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.646894    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.646894    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Audit-Id: f35af6a5-cb54-4f3a-a859-d4268c14877e
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.647187    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1828","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0428 18:31:21.647739    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.647739    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.647739    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.647739    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.650311    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:21.650311    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Audit-Id: 2caa71c7-c1b8-47dc-9700-df9b0410bb56
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.650502    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.650502    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.650502    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.650685    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.650685    5100 pod_ready.go:92] pod "etcd-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.650685    5100 pod_ready.go:81] duration metric: took 6.8321ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.650685    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.710990    5100 request.go:629] Waited for 60.172ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:21.711066    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:21.711066    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.711066    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.711066    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.714561    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.714561    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Audit-Id: fc8e88b9-66f9-4898-9ff1-4315cda3ab66
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.715037    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.715037    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.715299    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"5ade8d95-5387-4444-95af-604116cf695e","resourceVersion":"1819","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.239.170:8443","kubernetes.io/config.hash":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.mirror":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.seen":"2024-04-29T01:31:06.268742128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0428 18:31:21.897294    5100 request.go:629] Waited for 181.1138ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.897451    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.897451    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.897451    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.897451    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.902008    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:21.902330    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.902330    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.902405    5100 round_trippers.go:580]     Audit-Id: 455e52d6-9783-4cd0-ba22-d7ced6bdbde5
	I0428 18:31:21.902474    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.902513    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.902513    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.902563    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.902731    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.903336    5100 pod_ready.go:92] pod "kube-apiserver-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.903336    5100 pod_ready.go:81] duration metric: took 252.6502ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.903390    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:22.101010    5100 request.go:629] Waited for 197.3159ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.101123    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.101329    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.101329    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.101329    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.105803    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.105803    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.105803    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.105803    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Audit-Id: 2718f490-3370-4fab-81d1-075ce51d9a4b
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.106752    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.107214    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:22.302267    5100 request.go:629] Waited for 194.1916ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.302870    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.302870    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.302870    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.302870    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.306443    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:22.307139    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Audit-Id: 1859c6bd-dd6f-46f3-8023-86dfbf522bb5
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.307139    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.307139    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.307433    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:22.503587    5100 request.go:629] Waited for 93.5627ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.503911    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.503911    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.503911    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.503911    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.508599    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.508599    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.508599    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.508599    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Audit-Id: 69d38582-07ce-450b-9982-677772a19f0f
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.508599    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:22.705855    5100 request.go:629] Waited for 196.1165ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.706020    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.706020    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.706020    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.706020    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.710776    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.710776    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Audit-Id: b319da81-14dd-4a76-b77b-5cad9a9f0cdd
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.710776    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.710776    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.711099    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:22.909509    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.909509    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.909509    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.909509    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.913239    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:22.913239    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Audit-Id: 90797ee4-eb66-443a-bee2-91e3160ae5a3
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.914152    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.914197    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.914197    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.914394    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.096951    5100 request.go:629] Waited for 181.6718ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.097189    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.097189    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.097189    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.097189    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.103361    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:23.103791    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.103791    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.103791    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Audit-Id: 195353fb-71f8-4541-826a-8108aaac1962
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.104000    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.410524    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:23.410524    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.410524    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.410524    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.418485    5100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:31:23.418637    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.418637    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.418637    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.418723    5100 round_trippers.go:580]     Audit-Id: a9965ad6-304f-4265-b0f7-4574d439bc5e
	I0428 18:31:23.418987    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.504617    5100 request.go:629] Waited for 84.6283ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.504868    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.504868    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.504908    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.504908    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.512339    5100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:31:23.512339    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Audit-Id: 44c42d96-1347-4a1d-bb98-6efab260b0a9
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.512339    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.512339    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.512948    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.912694    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:23.912694    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.912694    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.912694    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.916280    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:23.917051    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.917051    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.917051    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.917051    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.917169    5100 round_trippers.go:580]     Audit-Id: e56e8589-fd0b-4a10-8978-88a5498adf87
	I0428 18:31:23.917169    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.917255    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.917386    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.918466    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.918466    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.918466    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.918545    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.920990    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:23.920990    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.921364    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.921364    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Audit-Id: 459d79fa-7fd5-458c-b59b-4aa09ca2d11f
	I0428 18:31:23.921619    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.921844    5100 pod_ready.go:102] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:24.403813    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:24.403813    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.403898    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.403898    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.407347    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:24.407347    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.407880    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.407880    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Audit-Id: 6c51501b-33a9-4f17-83a5-0d289e64f234
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.408280    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:24.409107    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:24.409107    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.409107    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.409107    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.418873    5100 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0428 18:31:24.418999    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.418999    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.418999    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Audit-Id: c65fc721-9bdd-425f-884a-ac4fc9762dac
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.418999    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:24.907990    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:24.907990    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.907990    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.907990    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.911050    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:24.911818    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.911818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.911818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Audit-Id: b319d2c2-62a5-4196-b683-3941c10aa59c
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.912137    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:24.912842    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:24.912842    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.912842    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.912842    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.915423    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:24.915423    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Audit-Id: 84f5841e-e7ee-45e3-a703-0f959c7f358a
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.915997    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.915997    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.916211    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:25.406479    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:25.406479    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.406479    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.406479    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.410068    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:25.411003    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.411003    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Audit-Id: 65af6da8-cf58-4415-9bd1-78eb11064ed9
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.411085    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.411437    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:25.412086    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:25.412086    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.412086    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.412086    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.416108    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:25.416108    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.416108    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.416108    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.416108    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.416108    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.416349    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.416349    5100 round_trippers.go:580]     Audit-Id: 85a041c9-f007-4e8d-a7e5-2d480a07a6f2
	I0428 18:31:25.416451    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:25.905969    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:25.906041    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.906041    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.906041    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.910420    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:25.910753    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.910753    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Audit-Id: f26b3776-3168-481a-a906-dc87ef8303f5
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.910753    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.911278    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:25.912093    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:25.912171    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.912243    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.912280    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.916509    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:25.916564    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.916564    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.916606    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.916606    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.916606    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.916606    5100 round_trippers.go:580]     Audit-Id: 4e2b371e-42dc-4d12-9f9d-0c0566f49f31
	I0428 18:31:25.916652    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.917158    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.406983    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:26.407082    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.407082    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.407082    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.411527    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:26.411527    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.412073    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.412073    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Audit-Id: b9d642b0-29ca-47a0-af35-12fa93ac8141
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.412518    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:26.413377    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.413377    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.413469    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.413469    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.416937    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.416937    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Audit-Id: 5bde9dee-4272-4b16-9ef7-cef4f1306ca7
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.417604    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.417604    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.417907    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.418633    5100 pod_ready.go:102] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:26.910803    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:26.910803    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.910803    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.910803    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.914461    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.915082    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.915082    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.915082    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Audit-Id: cf8512b6-0c9a-49e4-b462-11a9c7c0186e
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.915465    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1845","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0428 18:31:26.916199    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.916253    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.916253    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.916253    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.919831    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.919831    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.919831    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.919831    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Audit-Id: eb964c52-b7a1-4dce-84d1-d5ced6289e32
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.919831    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.920847    5100 pod_ready.go:92] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:26.920847    5100 pod_ready.go:81] duration metric: took 5.0174446s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
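	[editor's sketch] The wait that just completed polls the pod (and its node) with repeated GETs until the Ready condition is True. A minimal sketch of the same loop using client-go — not minikube's actual pod_ready implementation; it assumes a kubeconfig at the default path and a hypothetical helper name:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is True,
    // mirroring the GET loop in the log above (hypothetical helper).
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = waitPodReady(context.Background(), cs, "kube-system",
    		"kube-controller-manager-multinode-788600", 6*time.Minute)
    	fmt.Println("ready:", err == nil)
    }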
	I0428 18:31:26.920847    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.920847    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:26.920847    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.920847    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.920847    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.923862    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.923991    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Audit-Id: 34326d60-61eb-4e29-9e55-3265edff4448
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.923991    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.923991    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.924328    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"1811","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0428 18:31:26.925059    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.925157    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.925157    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.925157    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.929745    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:26.930529    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Audit-Id: 21fcb88b-b68a-4e51-b75f-79f6bbbc4901
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.930529    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.930529    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.930529    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.930529    5100 pod_ready.go:92] pod "kube-proxy-bkkql" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:26.930529    5100 pod_ready.go:81] duration metric: took 9.6822ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.930529    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.930529    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:26.930529    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.930529    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.930529    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.933549    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.933549    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Audit-Id: bafcc134-e6f0-426a-a801-c20dfa8ae175
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.933549    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.933549    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.933549    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:31:27.098538    5100 request.go:629] Waited for 163.8061ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:27.098710    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:27.098710    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.098710    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.098710    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.102441    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.102441    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.103445    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.103445    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Audit-Id: fb5898ce-a6b8-4a4a-b6d5-31ad26eecf80
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.103520    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.105457    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"1353","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0428 18:31:27.105457    5100 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:27.105457    5100 pod_ready.go:81] duration metric: took 174.9279ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
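	[editor's sketch] The "Waited for ... due to client-side throttling" lines above come from client-go's own rate limiter, which (to my understanding) defaults to QPS=5 and Burst=10 on rest.Config; the ~200 ms pacing between requests is that limiter, not API Priority and Fairness. A hedged sketch of raising the limits, assuming the same kubeconfig setup as the earlier sketch:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go's defaults (QPS=5, Burst=10) produce the throttling
    	// waits logged above; raising them trades API load for latency.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }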
	I0428 18:31:27.105457    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.301745    5100 request.go:629] Waited for 195.5395ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:27.302056    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:27.302056    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.302056    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.302056    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.307781    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:27.307860    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.307860    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.307860    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.307926    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.307926    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.307952    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.307952    5100 round_trippers.go:580]     Audit-Id: 7efa1919-f143-4c8f-b032-2b86afdfc5a3
	I0428 18:31:27.307981    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjsfc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f06aadb7-e646-4105-af2f-0acc4a8ad174","resourceVersion":"1698","creationTimestamp":"2024-04-29T01:16:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:16:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0428 18:31:27.502902    5100 request.go:629] Waited for 193.7858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:27.503060    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:27.503060    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.503060    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.503060    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.506683    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.507255    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Audit-Id: 9d27db1a-1bf1-43d7-9ff4-dca89bead646
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.507255    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.507255    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.507493    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m03","uid":"d68977ad-af85-4957-85dc-4ad584113d26","resourceVersion":"1842","creationTimestamp":"2024-04-29T01:26:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_26_47_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:26:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0428 18:31:27.508040    5100 pod_ready.go:97] node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:27.508183    5100 pod_ready.go:81] duration metric: took 402.6814ms for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:27.508199    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
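	[editor's sketch] kube-proxy-sjsfc is skipped above because its node reports Ready:"Unknown" (kubelet on m03 has stopped posting status), which the wait treats the same as "False". A small self-contained sketch of that condition check, with a dummy node standing in for the real object:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // nodeReady treats anything other than ConditionTrue (including the
    // "Unknown" reported for multinode-788600-m03 above) as not ready.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
    		{Type: corev1.NodeReady, Status: corev1.ConditionUnknown},
    	}}}
    	fmt.Println(nodeReady(n)) // false: "Unknown" counts as not ready
    }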
	I0428 18:31:27.508199    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.706822    5100 request.go:629] Waited for 198.3375ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:27.707038    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:27.707038    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.707038    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.707038    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.710618    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.710618    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.710618    5100 round_trippers.go:580]     Audit-Id: 346dffd5-6ed0-444b-982a-bdfbd2984a5d
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.710785    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.710785    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.710965    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"1834","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0428 18:31:27.909797    5100 request.go:629] Waited for 197.525ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:27.910028    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:27.910109    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.910109    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.910109    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.914589    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:27.914589    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.914589    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.914589    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.914589    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Audit-Id: 9b5cf9aa-ca13-4191-8718-7bcc2058694f
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.914843    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:27.915494    5100 pod_ready.go:92] pod "kube-scheduler-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:27.915494    5100 pod_ready.go:81] duration metric: took 407.2947ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.915494    5100 pod_ready.go:38] duration metric: took 10.8706849s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:27.915494    5100 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:31:27.928493    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:27.958046    5100 command_runner.go:130] > 1873
	I0428 18:31:27.958165    5100 api_server.go:72] duration metric: took 11.2301726s to wait for apiserver process to appear ...
	I0428 18:31:27.958165    5100 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:31:27.958239    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:27.966618    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
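	[editor's sketch] The healthz probe above is a raw GET against /healthz on the apiserver; a 200 with the literal body "ok" means healthy. A helper sketch of the same call via the clientset's REST client (reusing a clientset `cs` built as in the pod-readiness sketch; this is not minikube's api_server.go code):

    package sketch

    import (
    	"context"

    	"k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy issues the same raw GET /healthz seen above and
    // reports healthy only on a successful "ok" body.
    func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) bool {
    	data, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	return err == nil && string(data) == "ok"
    }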
	I0428 18:31:27.967716    5100 round_trippers.go:463] GET https://172.27.239.170:8443/version
	I0428 18:31:27.967756    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.967798    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.967798    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.970713    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:27.970929    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.970929    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Content-Length: 263
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Audit-Id: d08eef7e-51d9-480d-801f-83d53e5365c3
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.971026    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.971026    5100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:31:27.971163    5100 api_server.go:141] control plane version: v1.30.0
	I0428 18:31:27.971195    5100 api_server.go:131] duration metric: took 12.9561ms to wait for apiserver health ...
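	[editor's sketch] The /version payload above decodes into client-go's version.Info, whose GitVersion field carries the "v1.30.0" the log extracts as the control-plane version. A helper sketch, again assuming a clientset `cs` from the first sketch:

    package sketch

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    // serverVersion fetches the same GET /version body shown above.
    func serverVersion(cs kubernetes.Interface) error {
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return err
    	}
    	fmt.Printf("control plane version: %s (go %s, %s)\n", v.GitVersion, v.GoVersion, v.Platform)
    	return nil
    }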
	I0428 18:31:27.971195    5100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:31:28.110224    5100 request.go:629] Waited for 138.7183ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.110224    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.110224    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.110224    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.110224    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.117002    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:28.117293    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Audit-Id: 35f8cbc1-51d6-4b4a-b6c5-4c6af5816f17
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.117293    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.117293    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.118618    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86158 chars]
	I0428 18:31:28.122837    5100 system_pods.go:59] 12 kube-system pods found
	I0428 18:31:28.122837    5100 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:31:28.123014    5100 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:31:28.123089    5100 system_pods.go:74] duration metric: took 151.8941ms to wait for pod list to return data ...
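	[editor's sketch] The "12 kube-system pods found" summary above is a single List over the kube-system namespace followed by one name/UID/phase line per pod. A helper sketch of that step (assumes a clientset `cs` as in the first sketch):

    package sketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listSystemPods reproduces the per-pod summary lines logged above.
    func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    	return nil
    }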
	I0428 18:31:28.123142    5100 default_sa.go:34] waiting for default service account to be created ...
	I0428 18:31:28.311814    5100 request.go:629] Waited for 188.3166ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:31:28.311814    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:31:28.311814    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.311814    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.311814    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.316444    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:28.317105    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.317105    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.317105    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Content-Length: 262
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Audit-Id: cd65f6c5-26c4-4ad7-aba0-8dea016a8f55
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.317204    5100 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cd75ac33-a0a3-4b71-9266-aa10ab97a649","resourceVersion":"328","creationTimestamp":"2024-04-29T01:09:02Z"}}]}
	I0428 18:31:28.317550    5100 default_sa.go:45] found service account: "default"
	I0428 18:31:28.317550    5100 default_sa.go:55] duration metric: took 194.4066ms for default service account to be created ...
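	[editor's sketch] The check above lists the namespace's ServiceAccounts and looks for "default". A sketch of the same check done with a direct Get instead of a List (a deliberate simplification, not minikube's default_sa.go; assumes `cs` from the first sketch):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // hasDefaultServiceAccount returns true once kube-controller-manager has
    // created the "default" ServiceAccount in the default namespace.
    func hasDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface) bool {
    	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    	return err == nil
    }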
	I0428 18:31:28.317659    5100 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 18:31:28.498845    5100 request.go:629] Waited for 181.1371ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.499029    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.499029    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.499029    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.499029    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.505707    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:28.505707    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.505707    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Audit-Id: aa46fee1-69c6-4bcc-a38e-ab3ddbb26b03
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.506263    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.507406    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86158 chars]
	I0428 18:31:28.512076    5100 system_pods.go:86] 12 kube-system pods found
	I0428 18:31:28.512215    5100 system_pods.go:89] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:31:28.512215    5100 system_pods.go:126] duration metric: took 194.5554ms to wait for k8s-apps to be running ...
	I0428 18:31:28.512215    5100 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 18:31:28.523596    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:31:28.548090    5100 system_svc.go:56] duration metric: took 35.8758ms WaitForService to wait for kubelet
	I0428 18:31:28.548090    5100 kubeadm.go:576] duration metric: took 11.8200968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:31:28.548090    5100 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:31:28.702139    5100 request.go:629] Waited for 153.8724ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:28.702342    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:28.702342    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.702342    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.702342    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.707188    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:28.707350    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Audit-Id: acdc7926-627b-4787-8c23-2d4f5214c459
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.707350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.707350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.707958    5100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15503 chars]
	I0428 18:31:28.709032    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709032    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709032    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709119    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709119    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709119    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709119    5100 node_conditions.go:105] duration metric: took 161.0283ms to run NodePressure ...
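	[editor's sketch] The three capacity pairs above come from one NodeList: each item's status carries ephemeral-storage and cpu quantities (17734596Ki and 2 for all three nodes here). A helper sketch reading those fields from NodeStatus.Capacity — whether minikube reads Capacity or Allocatable is an assumption; `cs` is the clientset from the first sketch:

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two resources the
    // NodePressure check above reports.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    	return nil
    }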
	I0428 18:31:28.709119    5100 start.go:240] waiting for startup goroutines ...
	I0428 18:31:28.709180    5100 start.go:245] waiting for cluster config update ...
	I0428 18:31:28.709206    5100 start.go:254] writing updated cluster config ...
	I0428 18:31:28.713635    5100 out.go:177] 
	I0428 18:31:28.728535    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:31:28.729592    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:31:28.736674    5100 out.go:177] * Starting "multinode-788600-m02" worker node in "multinode-788600" cluster
	I0428 18:31:28.739063    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:31:28.739063    5100 cache.go:56] Caching tarball of preloaded images
	I0428 18:31:28.739414    5100 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:31:28.739414    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:31:28.739414    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:31:28.741647    5100 start.go:360] acquireMachinesLock for multinode-788600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:31:28.742058    5100 start.go:364] duration metric: took 410.2µs to acquireMachinesLock for "multinode-788600-m02"
	I0428 18:31:28.742202    5100 start.go:96] Skipping create...Using existing machine configuration
	I0428 18:31:28.742240    5100 fix.go:54] fixHost starting: m02
	I0428 18:31:28.742706    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:30.731719    5100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:31:30.731719    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:30.731719    5100 fix.go:112] recreateIfNeeded on multinode-788600-m02: state=Stopped err=<nil>
	W0428 18:31:30.731719    5100 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 18:31:30.737932    5100 out.go:177] * Restarting existing hyperv VM for "multinode-788600-m02" ...
	I0428 18:31:30.740224    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600-m02
	I0428 18:31:33.744619    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:33.744865    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:33.744865    5100 main.go:141] libmachine: Waiting for host to start...
	I0428 18:31:33.744865    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:38.345518    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:38.345783    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:39.349110    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:41.478789    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:41.478985    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:41.478985    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:43.966341    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:43.967262    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:44.974390    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:47.102289    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:47.102289    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:47.102510    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:49.538127    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:49.538127    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:50.538957    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:55.084780    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:55.084780    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:56.086813    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:58.209363    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:58.210203    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:58.210203    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:00.710459    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:00.710539    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:00.713463    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:02.772748    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:02.772748    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:02.773382    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:05.249675    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:05.249675    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:05.250138    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:32:05.252945    5100 machine.go:94] provisionDockerMachine start ...
	I0428 18:32:05.253070    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:07.311282    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:07.311648    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:07.311648    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:09.851540    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:09.851968    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:09.857517    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:09.858234    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:09.858234    5100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:32:09.987588    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
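	[editor's sketch] Provisioning now switches to SSH: each "About to run SSH command" step dials the VM's IP and runs one command (here `hostname`, which still answers "minikube" before the rename below). A sketch of one such round-trip with golang.org/x/crypto/ssh; the "docker" user and the machines\...\id_rsa key path are assumptions about minikube's layout, and host-key checking is skipped only to keep the sketch short:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH dials the VM, runs one command, and returns its output.
    func runSSH(addr, user, keyPath, command string) (string, error) {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("172.27.237.37:22", "docker",
    		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa`,
    		"hostname")
    	fmt.Println(out, err)
    }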
	I0428 18:32:09.987588    5100 buildroot.go:166] provisioning hostname "multinode-788600-m02"
	I0428 18:32:09.987674    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:12.009811    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:12.009993    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:12.010120    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:14.460526    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:14.460526    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:14.466292    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:14.466996    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:14.466996    5100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600-m02 && echo "multinode-788600-m02" | sudo tee /etc/hostname
	I0428 18:32:14.614945    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600-m02
	
	I0428 18:32:14.614945    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:16.646763    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:16.647833    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:16.647952    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:19.130150    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:19.130150    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:19.135386    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:19.135386    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:19.135912    5100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:32:19.269802    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 18:32:19.269875    5100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:32:19.269931    5100 buildroot.go:174] setting up certificates
	I0428 18:32:19.269976    5100 provision.go:84] configureAuth start
	I0428 18:32:19.269976    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:21.299985    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:21.299985    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:21.300532    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:23.785896    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:23.785896    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:23.786564    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:25.835274    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:25.835274    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:25.835486    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:28.326513    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:28.327140    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:28.327140    5100 provision.go:143] copyHostCerts
	I0428 18:32:28.327140    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:32:28.327140    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:32:28.327140    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:32:28.328102    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:32:28.329575    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:32:28.330124    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:32:28.330215    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:32:28.330287    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:32:28.331583    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:32:28.331858    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:32:28.331858    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:32:28.332639    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:32:28.333443    5100 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600-m02 san=[127.0.0.1 172.27.237.37 localhost minikube multinode-788600-m02]
	I0428 18:32:28.497786    5100 provision.go:177] copyRemoteCerts
	I0428 18:32:28.511364    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:32:28.511364    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:30.560256    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:30.560712    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:30.560991    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:33.031720    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:33.032061    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:33.032170    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:32:33.145316    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.633862s)
	I0428 18:32:33.145411    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:32:33.145872    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:32:33.198469    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:32:33.199250    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0428 18:32:33.249609    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:32:33.250115    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 18:32:33.312741    5100 provision.go:87] duration metric: took 14.0427318s to configureAuth
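Note: configureAuth above generates a server certificate with SANs [127.0.0.1 172.27.237.37 localhost minikube multinode-788600-m02] and copies ca.pem, server.pem, and server-key.pem to /etc/docker; those paths match the --tlscacert/--tlscert/--tlskey flags in the dockerd unit written below. One hedged way to exercise the endpoint from the host once docker is up, using the client certs that copyHostCerts staged (standard docker CLI flags; the quoting is illustrative):

	docker --tlsverify \
	  --tlscacert 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem' \
	  --tlscert 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem' \
	  --tlskey 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem' \
	  -H tcp://172.27.237.37:2376 version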
	I0428 18:32:33.312897    5100 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:32:33.313841    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:32:33.314007    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:37.773454    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:37.773454    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:37.780545    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:37.780621    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:37.780621    5100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:32:37.911382    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:32:37.911479    5100 buildroot.go:70] root file system type: tmpfs
	I0428 18:32:37.911733    5100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:32:37.911733    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:40.022110    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:40.022110    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:40.022221    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:42.596109    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:42.596981    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:42.603492    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:42.603492    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:42.604065    5100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.239.170"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:32:42.759890    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.239.170
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:32:42.759890    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:44.747073    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:44.747511    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:44.747593    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:47.181908    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:47.181908    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:47.188297    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:47.188827    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:47.188827    5100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:32:49.529003    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:32:49.529584    5100 machine.go:97] duration metric: took 44.2765326s to provisionDockerMachine
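Note: the unit update above is a compare-and-swap: the candidate is written to docker.service.new, and diff installs it only when it differs from (or, as here, when there is no) existing unit; the nonzero diff exit status is what triggers the mv/daemon-reload/enable/restart branch. The same pattern for any unit, as a sketch:

	UNIT=/lib/systemd/system/docker.service   # target unit (from the log above)
	# diff exits nonzero when the files differ or the old unit is missing,
	# which is exactly when the new unit should be installed.
	sudo diff -u "$UNIT" "$UNIT.new" || {
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl -f daemon-reload &&
	  sudo systemctl -f enable docker &&
	  sudo systemctl -f restart docker
	}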
	I0428 18:32:49.529584    5100 start.go:293] postStartSetup for "multinode-788600-m02" (driver="hyperv")
	I0428 18:32:49.529584    5100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:32:49.541764    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:32:49.541764    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:54.060378    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:54.060378    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:54.060776    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:32:54.169892    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.628053s)
	I0428 18:32:54.184389    5100 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:32:54.190850    5100 command_runner.go:130] > NAME=Buildroot
	I0428 18:32:54.190850    5100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:32:54.190850    5100 command_runner.go:130] > ID=buildroot
	I0428 18:32:54.190850    5100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:32:54.190850    5100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:32:54.191950    5100 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:32:54.192074    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:32:54.192496    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:32:54.193473    5100 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:32:54.193473    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:32:54.208684    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:32:54.228930    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:32:54.273925    5100 start.go:296] duration metric: took 4.744136s for postStartSetup
	I0428 18:32:54.274049    5100 fix.go:56] duration metric: took 1m25.5316046s for fixHost
	I0428 18:32:54.274160    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:56.306850    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:56.307120    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:56.307120    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:58.721421    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:58.721421    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:58.729781    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:58.729925    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:58.729925    5100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 18:32:58.850694    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714354378.855254990
	
	I0428 18:32:58.850694    5100 fix.go:216] guest clock: 1714354378.855254990
	I0428 18:32:58.850694    5100 fix.go:229] Guest: 2024-04-28 18:32:58.85525499 -0700 PDT Remote: 2024-04-28 18:32:54.2740494 -0700 PDT m=+227.568030201 (delta=4.58120559s)
	I0428 18:32:58.850694    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:00.855861    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:00.855861    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:00.855943    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:03.353889    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:03.354496    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:03.359702    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:33:03.360312    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:33:03.360312    5100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714354378
	I0428 18:33:03.507702    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:32:58 UTC 2024
	
	I0428 18:33:03.507776    5100 fix.go:236] clock set: Mon Apr 29 01:32:58 UTC 2024
	 (err=<nil>)
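Note: fix.go above compares the guest wall clock against the host (delta 4.58s here) and then pins the guest with date -s @<epoch>. The read-and-set pair, as a sketch over plain SSH (the epoch value is the one from the log; the ssh invocation is illustrative, minikube drives this through its own SSH runner):

	# Read the guest clock with sub-second precision.
	ssh -i .minikube/machines/multinode-788600-m02/id_rsa docker@172.27.237.37 'date +%s.%N'
	# Pin the guest clock to a known epoch (UTC); 1714354378 is from the log above.
	ssh -i .minikube/machines/multinode-788600-m02/id_rsa docker@172.27.237.37 'sudo date -s @1714354378'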
	I0428 18:33:03.507822    5100 start.go:83] releasing machines lock for "multinode-788600-m02", held for 1m34.7655374s
	I0428 18:33:03.508023    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:07.913230    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:07.913475    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:07.916681    5100 out.go:177] * Found network options:
	I0428 18:33:07.927793    5100 out.go:177]   - NO_PROXY=172.27.239.170
	W0428 18:33:07.930394    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:33:07.933609    5100 out.go:177]   - NO_PROXY=172.27.239.170
	W0428 18:33:07.935889    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 18:33:07.937225    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:33:07.940076    5100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:33:07.940160    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:07.950375    5100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:33:07.950375    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:10.050724    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:10.051108    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:10.051210    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:12.565621    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:12.565621    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:12.566812    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:33:12.598545    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:12.598640    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:12.598771    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:33:12.664665    5100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0428 18:33:12.665276    5100 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7148894s)
	W0428 18:33:12.665374    5100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:33:12.679974    5100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:33:12.789857    5100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:33:12.790010    5100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:33:12.790010    5100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8497669s)
	I0428 18:33:12.790010    5100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:33:12.790010    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:33:12.790288    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:33:12.826620    5100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:33:12.841093    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:33:12.871023    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:33:12.892178    5100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:33:12.905247    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:33:12.938633    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:33:12.970304    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:33:13.001024    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:33:13.032485    5100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:33:13.065419    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:33:13.096245    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:33:13.128214    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:33:13.166014    5100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:33:13.183104    5100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:33:13.193636    5100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:33:13.223445    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:33:13.433968    5100 ssh_runner.go:195] Run: sudo systemctl restart containerd
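Note: the run of sed edits above rewrites /etc/containerd/config.toml in place: pause:3.9 as the sandbox image, restrict_oom_score_adj and SystemdCgroup forced to false (cgroupfs driver), the legacy io.containerd.runtime.v1.linux and runc v1 runtimes mapped to the runc v2 shim, conf_dir pointed at /etc/cni/net.d, and enable_unprivileged_ports re-added. Condensed into one sketch over the same file:

	CFG=/etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
	sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$CFG"
	# cgroupfs, not systemd, manages container cgroups:
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	# route both legacy runtime names to the runc v2 shim:
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$CFG"
	sudo systemctl daemon-reload && sudo systemctl restart containerd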
	I0428 18:33:13.467059    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:33:13.481994    5100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:33:13.506238    5100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:33:13.506238    5100 command_runner.go:130] > [Unit]
	I0428 18:33:13.506238    5100 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:33:13.506238    5100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:33:13.506238    5100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:33:13.506238    5100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:33:13.506238    5100 command_runner.go:130] > StartLimitBurst=3
	I0428 18:33:13.506238    5100 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:33:13.506238    5100 command_runner.go:130] > [Service]
	I0428 18:33:13.506238    5100 command_runner.go:130] > Type=notify
	I0428 18:33:13.506238    5100 command_runner.go:130] > Restart=on-failure
	I0428 18:33:13.506238    5100 command_runner.go:130] > Environment=NO_PROXY=172.27.239.170
	I0428 18:33:13.506238    5100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:33:13.506238    5100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:33:13.506238    5100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:33:13.506238    5100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:33:13.506238    5100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:33:13.506238    5100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:33:13.506238    5100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:33:13.506238    5100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:33:13.506238    5100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecStart=
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:33:13.506238    5100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:33:13.506238    5100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:33:13.506238    5100 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:33:13.506238    5100 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > LimitCORE=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:33:13.506781    5100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:33:13.506781    5100 command_runner.go:130] > TasksMax=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:33:13.506781    5100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:33:13.506781    5100 command_runner.go:130] > Delegate=yes
	I0428 18:33:13.506781    5100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:33:13.506781    5100 command_runner.go:130] > KillMode=process
	I0428 18:33:13.506781    5100 command_runner.go:130] > [Install]
	I0428 18:33:13.506781    5100 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:33:13.520708    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:33:13.558375    5100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:33:13.617753    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:33:13.659116    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:33:13.695731    5100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:33:13.761229    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:33:13.785450    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:33:13.821474    5100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 18:33:13.835113    5100 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:33:13.845616    5100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:33:13.860160    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:33:13.876613    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:33:13.922608    5100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:33:14.133089    5100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:33:14.319723    5100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:33:14.319858    5100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
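Note: the log records only the size of the daemon.json it writes (130 bytes), not its contents; given docker.go's "configuring docker to use cgroupfs" message just above, a plausible shape is the standard cgroup-driver override below. Treat the exact keys as an assumption:

	# Assumed daemon.json contents; the log only shows the file size.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF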
	I0428 18:33:14.365706    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:33:14.564799    5100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:34:15.692524    5100 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0428 18:34:15.692592    5100 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0428 18:34:15.692592    5100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1276455s)
	I0428 18:34:15.705979    5100 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 18:34:15.728446    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0428 18:34:15.728577    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.809969396Z" level=info msg="Starting up"
	I0428 18:34:15.728577    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.810971814Z" level=info msg="containerd not running, starting managed containerd"
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.812287837Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.847769870Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.874938755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0428 18:34:15.728747    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875097458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0428 18:34:15.728747    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875160459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0428 18:34:15.728818    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875177259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728840    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875749069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.728840    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875908772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876188877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876290779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876312679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0428 18:34:15.729020    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876324280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877036692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877872507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881632774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881737076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881892779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881991681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883069900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883201902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883221703Z" level=info msg="metadata content store policy set" policy=shared
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900315007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900509811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900578112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900636113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900666214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900753815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901202723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901383226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901578330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901609830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901628931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901645731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901661531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901678632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901695332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901717232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901736033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901751733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901782434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901801134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901817034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901832734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901848035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901869935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729987    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901902536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901919336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901939336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901954637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901970337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901985537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902004338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902045138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902061339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902075139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902212941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902320843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902354644Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902423045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902464146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0428 18:34:15.730378    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902479446Z" level=info msg="NRI interface is disabled by configuration."
	I0428 18:34:15.730378    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903415363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903706068Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903861271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.904299478Z" level=info msg="containerd successfully booted in 0.059611s"
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.876990250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.969290393Z" level=info msg="Loading containers: start."
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.292494295Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.376103508Z" level=info msg="Loading containers: done."
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.420350009Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.421214025Z" level=info msg="Daemon has completed initialization"
	I0428 18:34:15.730646    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531900928Z" level=info msg="API listen on /var/run/docker.sock"
	I0428 18:34:15.730646    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531988129Z" level=info msg="API listen on [::]:2376"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 systemd[1]: Started Docker Application Container Engine.
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.594905647Z" level=info msg="Processing signal 'terminated'"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597013752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597659553Z" level=info msg="Daemon shutdown complete"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598156755Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598169255Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 dockerd[1045]: time="2024-04-29T01:33:15.672598455Z" level=info msg="Starting up"
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 dockerd[1045]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: Failed to start Docker Application Container Engine.
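Note: the failure signature above is not dockerd crashing but dockerd[1045] blocking on "/run/containerd/containerd.sock" until the dial deadline, after the standalone containerd service was stopped earlier in this run (sudo systemctl stop -f containerd). A first triage on the guest, using only standard tools, would be:

	sudo systemctl status containerd --no-pager      # was containerd left stopped?
	ls -l /run/containerd/containerd.sock            # does the socket exist at all?
	sudo journalctl -u containerd --no-pager | tail -n 50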
	I0428 18:34:15.739602    5100 out.go:177] 
	W0428 18:34:15.742382    5100 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 01:32:47 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.809969396Z" level=info msg="Starting up"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.810971814Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.812287837Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.847769870Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.874938755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875097458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875160459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875177259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875749069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875908772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876188877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876290779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876312679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876324280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877036692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877872507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881632774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881737076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881892779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881991681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883069900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883201902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883221703Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900315007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900509811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900578112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900636113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900666214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900753815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901202723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901383226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901578330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901609830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901628931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901645731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901661531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901678632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901695332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901717232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901736033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901751733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901782434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901801134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901817034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901832734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901848035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901869935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901902536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901919336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901939336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901954637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901970337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901985537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902004338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902045138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902061339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902075139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902212941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902320843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902354644Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902423045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902464146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902479446Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903415363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903706068Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903861271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.904299478Z" level=info msg="containerd successfully booted in 0.059611s"
	Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.876990250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.969290393Z" level=info msg="Loading containers: start."
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.292494295Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.376103508Z" level=info msg="Loading containers: done."
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.420350009Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.421214025Z" level=info msg="Daemon has completed initialization"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531900928Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531988129Z" level=info msg="API listen on [::]:2376"
	Apr 29 01:32:49 multinode-788600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 01:33:14 multinode-788600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.594905647Z" level=info msg="Processing signal 'terminated'"
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597013752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597659553Z" level=info msg="Daemon shutdown complete"
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598156755Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598169255Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 01:33:15 multinode-788600-m02 dockerd[1045]: time="2024-04-29T01:33:15.672598455Z" level=info msg="Starting up"
	Apr 29 01:34:15 multinode-788600-m02 dockerd[1045]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 18:34:15.742938    5100 out.go:239] * 
	W0428 18:34:15.744099    5100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 18:34:15.746768    5100 out.go:177] 

                                                
                                                
** /stderr **
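The fatal line in the captured stderr is the second dockerd start (pid 1045) timing out against its containerd socket: "Starting up" at 01:33:15, then failed to dial "/run/containerd/containerd.sock": context deadline exceeded exactly 60 seconds later, after which systemd marks docker.service failed and minikube exits with RUNTIME_ENABLE. A minimal Go sketch of that failure mode, assuming a plain unix-socket dial retried under a 60s context; this is illustrative only, not dockerd's actual startup code:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// dialUntilDeadline retries a unix-socket dial until the context expires,
// roughly how a blocking client dial behaves while nothing is listening.
func dialUntilDeadline(ctx context.Context, path string) (net.Conn, error) {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", path)
		if err == nil {
			return conn, nil
		}
		select {
		case <-ctx.Done():
			// Surfaces as "context deadline exceeded", as in the dockerd log.
			return nil, ctx.Err()
		case <-time.After(time.Second): // back off, then retry
		}
	}
}

func main() {
	// dockerd[1045] logged "Starting up" at 01:33:15 and gave up at 01:34:15,
	// i.e. a 60-second window like this one.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	if _, err := dialUntilDeadline(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println("failed to dial:", err)
	}
}

Until something listens on that socket every attempt fails immediately, so the loop can only exit when the deadline fires, which is the error dockerd surfaced here.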
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-788600" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-788600
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-788600	172.27.231.169
multinode-788600-m02	172.27.230.221
multinode-788600-m03	172.27.237.64

                                                
                                                
After restart: multinode-788600	172.27.239.170
multinode-788600-m02	172.27.237.37
multinode-788600-m03	172.27.237.64
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-788600 -n multinode-788600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-788600 -n multinode-788600: (11.6561292s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 logs -n 25: (8.4707599s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:19 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600:/home/docker/cp-test_multinode-788600-m02_multinode-788600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600 sudo cat                                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:21 PDT |
	|         | /home/docker/cp-test_multinode-788600-m02_multinode-788600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m03:/home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600-m03 sudo cat                                                                    | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | /home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp testdata\cp-test.txt                                                                                 | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:22 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | multinode-788600:/home/docker/cp-test_multinode-788600-m03_multinode-788600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600 sudo cat                                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | /home/docker/cp-test_multinode-788600-m03_multinode-788600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:23 PDT |
	|         | multinode-788600-m02:/home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:23 PDT | 28 Apr 24 18:23 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600-m02 sudo cat                                                                    | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:23 PDT | 28 Apr 24 18:23 PDT |
	|         | /home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-788600 node stop m03                                                                                           | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:23 PDT | 28 Apr 24 18:23 PDT |
	| node    | multinode-788600 node start                                                                                              | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:24 PDT | 28 Apr 24 18:26 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-788600                                                                                                 | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:27 PDT |                     |
	| stop    | -p multinode-788600                                                                                                      | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:27 PDT | 28 Apr 24 18:29 PDT |
	| start   | -p multinode-788600                                                                                                      | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:29 PDT |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-788600                                                                                                 | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:34 PDT |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 18:29:06
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 18:29:06.809727    5100 out.go:291] Setting OutFile to fd 1908 ...
	I0428 18:29:06.810353    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:29:06.810353    5100 out.go:304] Setting ErrFile to fd 1912...
	I0428 18:29:06.810353    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:29:06.834778    5100 out.go:298] Setting JSON to false
	I0428 18:29:06.838611    5100 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11589,"bootTime":1714342556,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 18:29:06.838611    5100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 18:29:06.940529    5100 out.go:177] * [multinode-788600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 18:29:07.030586    5100 notify.go:220] Checking for updates...
	I0428 18:29:07.077632    5100 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:29:07.374230    5100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 18:29:07.485070    5100 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 18:29:07.638229    5100 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 18:29:07.772014    5100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 18:29:07.826039    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:29:07.826481    5100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 18:29:13.079444    5100 out.go:177] * Using the hyperv driver based on existing profile
	I0428 18:29:13.183795    5100 start.go:297] selected driver: hyperv
	I0428 18:29:13.183795    5100 start.go:901] validating driver "hyperv" against &{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:29:13.184921    5100 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 18:29:13.238392    5100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:29:13.239401    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:29:13.239401    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:29:13.239658    5100 start.go:340] cluster config:
	{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
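
The profile validated and dumped above is persisted as plain JSON (the config.json path is logged a few lines below). A minimal sketch of reading it back in Go, assuming a hand-picked subset of the fields visible in the dump; the struct here is illustrative, not minikube's actual config type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Subset of the profile schema; field names mirror the dump above.
type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

type ClusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}
	Nodes []Node
}

func main() {
	// Path taken from this run; adjust for your profile.
	data, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json`)
	if err != nil {
		panic(err)
	}
	var cfg ClusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	for _, n := range cfg.Nodes {
		fmt.Printf("node %q -> %s:%d (control-plane=%v)\n", n.Name, n.IP, n.Port, n.ControlPlane)
	}
}

Note that the control-plane entry has an empty Name while the workers are m02/m03; the Nodes list is what drives the per-node restart that follows.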
	I0428 18:29:13.239658    5100 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 18:29:13.267965    5100 out.go:177] * Starting "multinode-788600" primary control-plane node in "multinode-788600" cluster
	I0428 18:29:13.273325    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:29:13.273757    5100 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 18:29:13.273855    5100 cache.go:56] Caching tarball of preloaded images
	I0428 18:29:13.274319    5100 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:29:13.274564    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:29:13.274592    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:29:13.277394    5100 start.go:360] acquireMachinesLock for multinode-788600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:29:13.277394    5100 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-788600"
	I0428 18:29:13.278010    5100 start.go:96] Skipping create...Using existing machine configuration
	I0428 18:29:13.278010    5100 fix.go:54] fixHost starting: 
	I0428 18:29:13.278669    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:15.841355    5100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:29:15.841355    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:15.841437    5100 fix.go:112] recreateIfNeeded on multinode-788600: state=Stopped err=<nil>
	W0428 18:29:15.841437    5100 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 18:29:15.844029    5100 out.go:177] * Restarting existing hyperv VM for "multinode-788600" ...
	I0428 18:29:15.847206    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:18.788290    5100 main.go:141] libmachine: Waiting for host to start...
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:23.329935    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:23.329986    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:24.337456    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:26.424769    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:26.424769    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:26.424959    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:28.835446    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:28.835446    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:29.845210    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:31.915507    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:31.915507    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:31.916194    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:34.321357    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:34.321830    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:35.322335    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:39.926983    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:39.926983    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:40.928783    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:43.017582    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:43.018601    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:43.018670    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:45.467215    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:45.467701    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:45.470855    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:47.452061    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:47.453391    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:47.453481    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:49.918620    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:49.918620    5100 main.go:141] libmachine: [stderr =====>] : 
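
The "Waiting for host to start..." loop above is two PowerShell probes repeated until both succeed: ( Hyper-V\Get-VM <name> ).state must report Running, then the first NIC's first address must be non-empty. A rough standalone equivalent; the VM name, probe interval, and retry budget are placeholders, not minikube's values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

const ps = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

// psQuery runs one PowerShell expression and returns its trimmed stdout.
func psQuery(expr string) (string, error) {
	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-788600" // placeholder VM name
	for i := 0; i < 60; i++ {
		state, err := psQuery(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err == nil && state == "Running" {
			// The adapter can report Running well before DHCP hands out an address,
			// which is why the log above shows several empty stdout probes.
			ip, _ := psQuery(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if ip != "" {
				fmt.Println("VM up at", ip)
				return
			}
		}
		time.Sleep(time.Second) // each real probe takes ~2-5s of PowerShell startup anyway
	}
	fmt.Println("timed out waiting for VM")
}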
	I0428 18:29:49.919129    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:29:49.921224    5100 machine.go:94] provisionDockerMachine start ...
	I0428 18:29:49.921854    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:51.906534    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:51.906962    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:51.906962    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:54.344777    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:54.345162    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:54.351253    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:29:54.351970    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:29:54.351970    5100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:29:54.482939    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
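
libmachine is using its own native Go SSH client here (the &{...} value dumped above is that client's configuration). A rough equivalent using the well-known golang.org/x/crypto/ssh package, with the key path and guest IP taken from this run; InsecureIgnoreHostKey is tolerable only because the target is a throwaway test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "172.27.239.170:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out) // "minikube" until the rename below runs
}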
	
	I0428 18:29:54.483063    5100 buildroot.go:166] provisioning hostname "multinode-788600"
	I0428 18:29:54.483182    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:58.861415    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:58.861500    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:58.866474    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:29:58.867158    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:29:58.867158    5100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600 && echo "multinode-788600" | sudo tee /etc/hostname
	I0428 18:29:59.026469    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600
	
	I0428 18:29:59.027057    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:01.078535    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:01.078960    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:01.079062    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:03.473105    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:03.473105    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:03.480109    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:03.480643    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:03.480643    5100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:30:03.632326    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 18:30:03.632436    5100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:30:03.632436    5100 buildroot.go:174] setting up certificates
	I0428 18:30:03.632533    5100 provision.go:84] configureAuth start
	I0428 18:30:03.632662    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:05.623591    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:05.623591    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:05.623674    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:07.995919    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:07.996008    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:07.996008    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:09.994705    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:09.994705    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:09.994978    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:12.476810    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:12.476810    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:12.476810    5100 provision.go:143] copyHostCerts
	I0428 18:30:12.477065    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:30:12.477065    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:30:12.477065    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:30:12.477997    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:30:12.479104    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:30:12.479438    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:30:12.479438    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:30:12.479915    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:30:12.480977    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:30:12.481170    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:30:12.481170    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:30:12.481170    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:30:12.482569    5100 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600 san=[127.0.0.1 172.27.239.170 localhost minikube multinode-788600]
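
configureAuth regenerates a Docker server certificate whose SANs cover every name the daemon might be reached by (the san=[...] list above). A self-contained sketch of the same idea with crypto/x509; it creates an ephemeral CA for brevity, whereas minikube reuses the ca.pem/ca-key.pem logged above, and most error checks are elided:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Ephemeral CA (minikube would load an existing CA key pair instead).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert; SAN entries copied from the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-788600"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.239.170")},
		DNSNames:     []string{"localhost", "minikube", "multinode-788600"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}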
	I0428 18:30:12.565240    5100 provision.go:177] copyRemoteCerts
	I0428 18:30:12.578456    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:30:12.578546    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:14.563247    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:14.563247    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:14.564084    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:17.004731    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:17.004884    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:17.005001    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:17.120514    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5420479s)
	I0428 18:30:17.120569    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:30:17.121103    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:30:17.169984    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:30:17.170584    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0428 18:30:17.216472    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:30:17.216472    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 18:30:17.262921    5100 provision.go:87] duration metric: took 13.630358s to configureAuth
	I0428 18:30:17.262921    5100 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:30:17.263897    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:30:17.264012    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:19.259871    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:19.259871    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:19.260050    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:21.723377    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:21.723454    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:21.729319    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:21.730083    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:21.730083    5100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:30:21.872016    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:30:21.872016    5100 buildroot.go:70] root file system type: tmpfs
	I0428 18:30:21.872016    5100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:30:21.872016    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:26.313949    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:26.313949    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:26.322783    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:26.322938    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:26.322938    5100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:30:26.486115    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:30:26.486115    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:30.893142    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:30.893142    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:30.900075    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:30.900075    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:30.900075    5100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:30:33.420018    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:30:33.420018    5100 machine.go:97] duration metric: took 43.498168s to provisionDockerMachine
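
The one-liner at 18:30:30 (diff -u old new || { mv ...; systemctl daemon-reload/enable/restart; }) makes the unit update idempotent: Docker is only reloaded and restarted when the rendered unit actually differs, and here the unit did not exist yet, so the diff failed and the new file was installed. A local Go sketch of the same compare-then-swap pattern, assuming the paths from the log and root privileges:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newBody, err := os.ReadFile(unit + ".new")
	if err != nil {
		panic(err)
	}
	old, _ := os.ReadFile(unit) // a missing unit (as in this log) simply counts as "changed"
	if bytes.Equal(old, newBody) {
		return // unit unchanged; leave the running daemon alone
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if err := run("systemctl", args...); err != nil {
			panic(err)
		}
	}
}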
	I0428 18:30:33.420018    5100 start.go:293] postStartSetup for "multinode-788600" (driver="hyperv")
	I0428 18:30:33.420018    5100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:30:33.433580    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:30:33.433580    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:35.421597    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:35.421597    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:35.421967    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:37.810277    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:37.811012    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:37.811315    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:37.920287    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4866971s)
	I0428 18:30:37.932767    5100 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:30:37.939254    5100 command_runner.go:130] > NAME=Buildroot
	I0428 18:30:37.939254    5100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:30:37.939254    5100 command_runner.go:130] > ID=buildroot
	I0428 18:30:37.939254    5100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:30:37.939254    5100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:30:37.939254    5100 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:30:37.939254    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:30:37.939952    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:30:37.940475    5100 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:30:37.940475    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:30:37.952512    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:30:37.969990    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:30:38.017497    5100 start.go:296] duration metric: took 4.5974689s for postStartSetup
	I0428 18:30:38.018511    5100 fix.go:56] duration metric: took 1m24.7403132s for fixHost
	I0428 18:30:38.018511    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:40.002285    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:40.002569    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:40.002569    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:42.426765    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:42.427054    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:42.433213    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:42.433408    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:42.433408    5100 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 18:30:42.568495    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714354242.563104735
	
	I0428 18:30:42.568584    5100 fix.go:216] guest clock: 1714354242.563104735
	I0428 18:30:42.568584    5100 fix.go:229] Guest: 2024-04-28 18:30:42.563104735 -0700 PDT Remote: 2024-04-28 18:30:38.018511 -0700 PDT m=+91.312813201 (delta=4.544593735s)
	I0428 18:30:42.568783    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:44.528614    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:44.528614    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:44.529235    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:46.913452    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:46.913716    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:46.920153    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:46.920882    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:46.921041    5100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714354242
	I0428 18:30:47.066116    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:30:42 UTC 2024
	
	I0428 18:30:47.066116    5100 fix.go:236] clock set: Mon Apr 29 01:30:42 UTC 2024
	 (err=<nil>)
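
The clock fix above samples the guest with date +%s.%N over SSH, compares it to the host clock, and rewrites the guest clock with sudo date -s @<epoch> when the delta is too large (4.54s here, the VM having been powered off). The delta arithmetic in isolation, using the two timestamps from this log; the one-second tolerance is a placeholder:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1714354242.563104735" // output of `date +%s.%N` on the guest
	// Host-side timestamp from the log (2024-04-28 18:30:38.018511 PDT).
	remote := time.Date(2024, 4, 28, 18, 30, 38, 18511000, time.FixedZone("PDT", -7*3600))

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	sec := int64(secs)
	guest := time.Unix(sec, int64((secs-float64(sec))*1e9))

	delta := guest.Sub(remote)
	fmt.Printf("delta=%v\n", delta) // ~4.544s, as logged
	if math.Abs(delta.Seconds()) > 1 { // tolerance is a placeholder value
		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
	}
}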
	I0428 18:30:47.066675    5100 start.go:83] releasing machines lock for "multinode-788600", held for 1m33.788514s
	I0428 18:30:47.066769    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:49.059891    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:49.060388    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:49.060388    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:51.541826    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:51.541826    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:51.545987    5100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:30:51.546223    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:51.556964    5100 ssh_runner.go:195] Run: cat /version.json
	I0428 18:30:51.556964    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:53.622682    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:53.622789    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:53.622943    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:56.119241    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:56.119241    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:56.120395    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:56.153523    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:56.154538    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:56.154788    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:56.212733    5100 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0428 18:30:56.212822    5100 ssh_runner.go:235] Completed: cat /version.json: (4.6558463s)
	I0428 18:30:56.227331    5100 ssh_runner.go:195] Run: systemctl --version
	I0428 18:30:56.298961    5100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:30:56.299087    5100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7530013s)
	I0428 18:30:56.299087    5100 command_runner.go:130] > systemd 252 (252)
	I0428 18:30:56.299087    5100 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0428 18:30:56.311091    5100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:30:56.322712    5100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0428 18:30:56.323363    5100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:30:56.335996    5100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:30:56.368726    5100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:30:56.368854    5100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:30:56.368894    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:30:56.369158    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:30:56.408119    5100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:30:56.420239    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:30:56.450407    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:30:56.468615    5100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:30:56.483087    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:30:56.518413    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:30:56.551580    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:30:56.590655    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:30:56.627626    5100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:30:56.668610    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:30:56.707360    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:30:56.741109    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:30:56.772199    5100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:30:56.789910    5100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:30:56.802591    5100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:30:56.831586    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:30:57.029306    5100 ssh_runner.go:195] Run: sudo systemctl restart containerd
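
Each sed invocation above is a line-oriented rewrite of /etc/containerd/config.toml. The SystemdCgroup flip, for instance, reduces to a single multiline regex; a sketch (real code should write atomically or keep a backup):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}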
	I0428 18:30:57.065129    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:30:57.081225    5100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:30:57.104967    5100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:30:57.104967    5100 command_runner.go:130] > [Unit]
	I0428 18:30:57.104967    5100 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:30:57.104967    5100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:30:57.105037    5100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:30:57.105037    5100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:30:57.105037    5100 command_runner.go:130] > StartLimitBurst=3
	I0428 18:30:57.105073    5100 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:30:57.105073    5100 command_runner.go:130] > [Service]
	I0428 18:30:57.105117    5100 command_runner.go:130] > Type=notify
	I0428 18:30:57.105117    5100 command_runner.go:130] > Restart=on-failure
	I0428 18:30:57.105117    5100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:30:57.105156    5100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:30:57.105156    5100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:30:57.105210    5100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:30:57.105210    5100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:30:57.105250    5100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:30:57.105250    5100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:30:57.105301    5100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:30:57.105357    5100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecStart=
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:30:57.105357    5100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitCORE=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:30:57.105357    5100 command_runner.go:130] > TasksMax=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:30:57.105357    5100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:30:57.105357    5100 command_runner.go:130] > Delegate=yes
	I0428 18:30:57.105357    5100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:30:57.105357    5100 command_runner.go:130] > KillMode=process
	I0428 18:30:57.105357    5100 command_runner.go:130] > [Install]
	I0428 18:30:57.105357    5100 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:30:57.118659    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:30:57.153965    5100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:30:57.204253    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:30:57.240015    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:30:57.277276    5100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:30:57.345718    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:30:57.371346    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:30:57.409737    5100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 18:30:57.423205    5100 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:30:57.430233    5100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:30:57.441325    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:30:57.458054    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:30:57.502947    5100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:30:57.700154    5100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:30:57.882896    5100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:30:57.883180    5100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 18:30:57.927721    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:30:58.124953    5100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:31:00.770105    5100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6450046s)
	I0428 18:31:00.781386    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 18:31:00.815860    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:31:00.858671    5100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 18:31:01.050250    5100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 18:31:01.245194    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:01.445475    5100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 18:31:01.496426    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:31:01.534763    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:01.718829    5100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 18:31:01.836605    5100 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 18:31:01.857291    5100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 18:31:01.874846    5100 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0428 18:31:01.874846    5100 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0428 18:31:01.874846    5100 command_runner.go:130] > Device: 0,22	Inode: 858         Links: 1
	I0428 18:31:01.874846    5100 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0428 18:31:01.874846    5100 command_runner.go:130] > Access: 2024-04-29 01:31:01.743369559 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] > Modify: 2024-04-29 01:31:01.743369559 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] > Change: 2024-04-29 01:31:01.748369612 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] >  Birth: -
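
"Will wait 60s for socket path" amounts to polling until the cri-dockerd socket exists and accepts a connection. A sketch of that wait, with the socket path from the log and a placeholder poll interval:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is a placeholder
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is up")
}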
	I0428 18:31:01.874846    5100 start.go:562] Will wait 60s for crictl version
	I0428 18:31:01.887754    5100 ssh_runner.go:195] Run: which crictl
	I0428 18:31:01.894982    5100 command_runner.go:130] > /usr/bin/crictl
	I0428 18:31:01.907488    5100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 18:31:01.975356    5100 command_runner.go:130] > Version:  0.1.0
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeName:  docker
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeApiVersion:  v1
	I0428 18:31:01.975356    5100 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 18:31:01.984920    5100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:31:02.021960    5100 command_runner.go:130] > 26.0.2
	I0428 18:31:02.031724    5100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:31:02.062921    5100 command_runner.go:130] > 26.0.2
	I0428 18:31:02.067738    5100 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 18:31:02.067738    5100 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 18:31:02.069125    5100 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 18:31:02.073587    5100 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 18:31:02.073587    5100 ip.go:210] interface addr: 172.27.224.1/20
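
getIPForInterface above scans the host's adapters for one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address; that address is what gets written into the guest's /etc/hosts as host.minikube.internal just below. Roughly, with the stdlib:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" in this log
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			// Skip the link-local IPv6 address and keep the IPv4 one.
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Println("host-side IP:", ipnet.IP) // 172.27.224.1 in this run
				return
			}
		}
	}
	fmt.Println("no matching interface")
}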
	I0428 18:31:02.090160    5100 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 18:31:02.096353    5100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:31:02.117037    5100 kubeadm.go:877] updating cluster {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 18:31:02.117328    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:31:02.126708    5100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:31:02.150678    5100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:31:02.151177    5100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:31:02.151177    5100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0428 18:31:02.151177    5100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0428 18:31:02.151177    5100 docker.go:615] Images already preloaded, skipping extraction
	I0428 18:31:02.161895    5100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:31:02.183468    5100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:31:02.183468    5100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:31:02.183468    5100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0428 18:31:02.183468    5100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0428 18:31:02.183468    5100 cache_images.go:84] Images are preloaded, skipping loading
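The two identical image listings above are the preload check: docker.go compares `docker images --format {{.Repository}}:{{.Tag}}` output against the expected preloaded set and skips tarball extraction when everything is already present. A rough sketch of that comparison, assuming a hypothetical imagesPreloaded helper and a trimmed expected list:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every expected image already shows up
    // in `docker images` output, which is the condition under which the log
    // above prints "Images are preloaded, skipping loading".
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images",
            "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil // at least one image missing: extract the preload
            }
        }
        return true, nil
    }

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.30.0",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/pause:3.9",
        }
        ok, err := imagesPreloaded(expected)
        fmt.Println(ok, err)
    }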
	I0428 18:31:02.183468    5100 kubeadm.go:928] updating node { 172.27.239.170 8443 v1.30.0 docker true true} ...
	I0428 18:31:02.183468    5100 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-788600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.239.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 18:31:02.192446    5100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 18:31:02.227627    5100 command_runner.go:130] > cgroupfs
	I0428 18:31:02.227627    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:31:02.227627    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:31:02.227627    5100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 18:31:02.227627    5100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.239.170 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-788600 NodeName:multinode-788600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.239.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.239.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 18:31:02.228352    5100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.239.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-788600"
	  kubeletExtraArgs:
	    node-ip: 172.27.239.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
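The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). As a sketch of how a consumer can split such a stream, here is a decode loop using gopkg.in/yaml.v3 (an assumption for illustration; minikube itself templates this file rather than parsing it this way):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Decode each document of the multi-document kubeadm.yaml and print
    // its apiVersion/kind, which is enough to route each document to the
    // right consumer (kubeadm, kubelet, or kube-proxy).
    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break // end of the YAML stream
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
        }
    }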
	I0428 18:31:02.243724    5100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubeadm
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubectl
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubelet
	I0428 18:31:02.263782    5100 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 18:31:02.277865    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 18:31:02.295334    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0428 18:31:02.327593    5100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 18:31:02.355898    5100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0428 18:31:02.400601    5100 ssh_runner.go:195] Run: grep 172.27.239.170	control-plane.minikube.internal$ /etc/hosts
	I0428 18:31:02.407693    5100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:31:02.442067    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:02.626741    5100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:31:02.665784    5100 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600 for IP: 172.27.239.170
	I0428 18:31:02.665784    5100 certs.go:194] generating shared ca certs ...
	I0428 18:31:02.665784    5100 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:02.666397    5100 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 18:31:02.667047    5100 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 18:31:02.667047    5100 certs.go:256] generating profile certs ...
	I0428 18:31:02.667730    5100 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.key
	I0428 18:31:02.668417    5100 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66
	I0428 18:31:02.668505    5100 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.239.170]
	I0428 18:31:03.091055    5100 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 ...
	I0428 18:31:03.091055    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66: {Name:mkaf1a9c903a6c9cf9004a34772c2d8b3ee15342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:03.093044    5100 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66 ...
	I0428 18:31:03.093044    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66: {Name:mk024a6f259c1625f6490ba1e52b63b460f3073d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:03.094536    5100 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt
	I0428 18:31:03.107123    5100 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key
	I0428 18:31:03.109129    5100 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key
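The apiserver cert generated above carries four IP SANs: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 172.27.239.170; clients can only dial addresses in that list. A self-contained sketch of issuing a certificate with that SAN list via crypto/x509; unlike minikube, which signs with its minikubeCA, this one is self-signed purely to keep the example short:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // A self-signed stand-in for the apiserver cert generated above: the
    // important part is the IPAddresses SAN list, which must cover every
    // address the API server is reached on.
    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // the SAN list from crypto.go above
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
                net.ParseIP("172.27.239.170"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }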
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 18:31:03.110129    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 18:31:03.111127    5100 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 18:31:03.112121    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 18:31:03.112121    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.113143    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 18:31:03.164538    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 18:31:03.213913    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 18:31:03.259463    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 18:31:03.307159    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 18:31:03.356708    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0428 18:31:03.409218    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 18:31:03.461775    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 18:31:03.502141    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 18:31:03.549108    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 18:31:03.597203    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 18:31:03.642354    5100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 18:31:03.686876    5100 ssh_runner.go:195] Run: openssl version
	I0428 18:31:03.696135    5100 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0428 18:31:03.708139    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 18:31:03.745183    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.753163    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.753526    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.765193    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.774235    5100 command_runner.go:130] > 51391683
	I0428 18:31:03.786397    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
	I0428 18:31:03.814386    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 18:31:03.850195    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.857810    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.857810    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.870129    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.878498    5100 command_runner.go:130] > 3ec20f2e
	I0428 18:31:03.890751    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 18:31:03.922266    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 18:31:03.952546    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.960640    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.960640    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.973542    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.982547    5100 command_runner.go:130] > b5213941
	I0428 18:31:03.992543    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
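Each CA PEM installed above is then linked as /etc/ssl/certs/<subject-hash>.0, where the hash (51391683, 3ec20f2e, b5213941 in this log) comes from `openssl x509 -hash -noout` and is the filename OpenSSL's directory lookup expects. A sketch of the same hash-and-symlink step from Go, shelling out to openssl exactly as the log does (function name and paths are illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCACert mirrors the hash-and-symlink sequence in the log:
    // compute the OpenSSL subject hash of a PEM, then point
    // /etc/ssl/certs/<hash>.0 at it so OpenSSL can find the CA.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale link, like `ln -fs` above
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }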
	I0428 18:31:04.020878    5100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:31:04.027800    5100 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:31:04.027800    5100 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0428 18:31:04.027800    5100 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0428 18:31:04.027800    5100 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:31:04.027800    5100 command_runner.go:130] > Access: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] > Modify: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] > Change: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] >  Birth: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.039221    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0428 18:31:04.049656    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.061648    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0428 18:31:04.075450    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.089519    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0428 18:31:04.099116    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.110882    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0428 18:31:04.120974    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.133464    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0428 18:31:04.146142    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.158268    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0428 18:31:04.167665    5100 command_runner.go:130] > Certificate will not expire
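The `-checkend 86400` probes above ask openssl whether each certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; a minimal sketch, assuming the first PEM block in each file is the certificate:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin is a native equivalent of the `openssl x509 -checkend`
    // calls above: report whether the first certificate in a PEM file
    // expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire") // matches the log output above
        }
    }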
	I0428 18:31:04.168193    5100 kubeadm.go:391] StartCluster: {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:31:04.178224    5100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:31:04.213190    5100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/minikube/etcd:
	I0428 18:31:04.233991    5100 command_runner.go:130] > member
	W0428 18:31:04.233991    5100 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0428 18:31:04.233991    5100 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0428 18:31:04.233991    5100 kubeadm.go:587] restartPrimaryControlPlane start ...
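The `sudo ls` above is the restart probe: the kubelet flags file, the kubelet config, and the etcd data dir all survive from the previous run, so minikube takes the restartPrimaryControlPlane path instead of a fresh init. A sketch of that decision (paths copied from the log, function name illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    // shouldRestart mirrors the decision logged above: if the kubelet and
    // etcd state files from a previous run are still on disk, attempt a
    // cluster restart instead of a fresh `kubeadm init`.
    func shouldRestart() bool {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                return false // something is missing: start from scratch
            }
        }
        return true
    }

    func main() {
        if shouldRestart() {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        }
    }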
	I0428 18:31:04.244993    5100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0428 18:31:04.263105    5100 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0428 18:31:04.263871    5100 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-788600" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:04.264562    5100 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-788600" cluster setting kubeconfig missing "multinode-788600" context setting]
	I0428 18:31:04.265326    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:04.279100    5100 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:04.279824    5100 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.239.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 18:31:04.281162    5100 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 18:31:04.294422    5100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0428 18:31:04.312988    5100 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0428 18:31:04.312988    5100 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0428 18:31:04.312988    5100 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0428 18:31:04.312988    5100 command_runner.go:130] >  kind: InitConfiguration
	I0428 18:31:04.312988    5100 command_runner.go:130] >  localAPIEndpoint:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -  advertiseAddress: 172.27.231.169
	I0428 18:31:04.312988    5100 command_runner.go:130] > +  advertiseAddress: 172.27.239.170
	I0428 18:31:04.312988    5100 command_runner.go:130] >    bindPort: 8443
	I0428 18:31:04.312988    5100 command_runner.go:130] >  bootstrapTokens:
	I0428 18:31:04.312988    5100 command_runner.go:130] >    - groups:
	I0428 18:31:04.312988    5100 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0428 18:31:04.312988    5100 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0428 18:31:04.312988    5100 command_runner.go:130] >    name: "multinode-788600"
	I0428 18:31:04.312988    5100 command_runner.go:130] >    kubeletExtraArgs:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -    node-ip: 172.27.231.169
	I0428 18:31:04.312988    5100 command_runner.go:130] > +    node-ip: 172.27.239.170
	I0428 18:31:04.312988    5100 command_runner.go:130] >    taints: []
	I0428 18:31:04.312988    5100 command_runner.go:130] >  ---
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0428 18:31:04.312988    5100 command_runner.go:130] >  kind: ClusterConfiguration
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiServer:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	I0428 18:31:04.312988    5100 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	I0428 18:31:04.312988    5100 command_runner.go:130] >    extraArgs:
	I0428 18:31:04.312988    5100 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0428 18:31:04.313995    5100 command_runner.go:130] >  controllerManager:
	I0428 18:31:04.313995    5100 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.231.169
	+  advertiseAddress: 172.27.239.170
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-788600"
	   kubeletExtraArgs:
	-    node-ip: 172.27.231.169
	+    node-ip: 172.27.239.170
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
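The diff above is minikube's drift detection: the kubeadm.yaml deployed by the previous start still advertises the old node IP 172.27.231.169, while kubeadm.yaml.new carries the freshly leased 172.27.239.170, so the control plane must be reconfigured from the new file. A sketch of the check, under the assumption that a byte-level comparison plus a `diff -u` for the log is all that is needed:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // configDrift mirrors the kubeadm.go check above: compare the deployed
    // kubeadm.yaml with the freshly generated kubeadm.yaml.new and, when
    // they differ, capture a unified diff for the log.
    func configDrift(oldPath, newPath string) (string, error) {
        oldData, err := os.ReadFile(oldPath)
        if err != nil {
            return "", err
        }
        newData, err := os.ReadFile(newPath)
        if err != nil {
            return "", err
        }
        if bytes.Equal(oldData, newData) {
            return "", nil // no drift, no reconfigure needed
        }
        // diff exits 1 when the files differ, so ignore that error
        // and keep the captured output.
        out, _ := exec.Command("diff", "-u", oldPath, newPath).Output()
        return string(out), nil
    }

    func main() {
        diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        if diff != "" {
            fmt.Println("detected kubeadm config drift:\n" + diff)
        }
    }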
	I0428 18:31:04.313995    5100 kubeadm.go:1154] stopping kube-system containers ...
	I0428 18:31:04.322985    5100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:31:04.353225    5100 command_runner.go:130] > 64e6fcf4a3f2
	I0428 18:31:04.353225    5100 command_runner.go:130] > 16ea9b9acd26
	I0428 18:31:04.353225    5100 command_runner.go:130] > 20d6a18478fc
	I0428 18:31:04.353225    5100 command_runner.go:130] > 70af634f6134
	I0428 18:31:04.353225    5100 command_runner.go:130] > 33e59494d8be
	I0428 18:31:04.353225    5100 command_runner.go:130] > 8542b2c39cf5
	I0428 18:31:04.353225    5100 command_runner.go:130] > 776d075f3716
	I0428 18:31:04.353225    5100 command_runner.go:130] > d1342e9d7111
	I0428 18:31:04.353225    5100 command_runner.go:130] > d55fefd692cf
	I0428 18:31:04.353225    5100 command_runner.go:130] > e148c0cdbae0
	I0428 18:31:04.353225    5100 command_runner.go:130] > edb2c636ad5d
	I0428 18:31:04.353225    5100 command_runner.go:130] > 27388b03fb26
	I0428 18:31:04.353225    5100 command_runner.go:130] > 038a267a1caf
	I0428 18:31:04.353225    5100 command_runner.go:130] > 9ffe1b8b41e4
	I0428 18:31:04.353225    5100 command_runner.go:130] > 8328e1b41d78
	I0428 18:31:04.353225    5100 command_runner.go:130] > 26381d4606b5
	I0428 18:31:04.354491    5100 docker.go:483] Stopping containers: [64e6fcf4a3f2 16ea9b9acd26 20d6a18478fc 70af634f6134 33e59494d8be 8542b2c39cf5 776d075f3716 d1342e9d7111 d55fefd692cf e148c0cdbae0 edb2c636ad5d 27388b03fb26 038a267a1caf 9ffe1b8b41e4 8328e1b41d78 26381d4606b5]
	I0428 18:31:04.364390    5100 ssh_runner.go:195] Run: docker stop 64e6fcf4a3f2 16ea9b9acd26 20d6a18478fc 70af634f6134 33e59494d8be 8542b2c39cf5 776d075f3716 d1342e9d7111 d55fefd692cf e148c0cdbae0 edb2c636ad5d 27388b03fb26 038a267a1caf 9ffe1b8b41e4 8328e1b41d78 26381d4606b5
	I0428 18:31:04.397389    5100 command_runner.go:130] > 64e6fcf4a3f2
	I0428 18:31:04.397389    5100 command_runner.go:130] > 16ea9b9acd26
	I0428 18:31:04.397539    5100 command_runner.go:130] > 20d6a18478fc
	I0428 18:31:04.397539    5100 command_runner.go:130] > 70af634f6134
	I0428 18:31:04.397539    5100 command_runner.go:130] > 33e59494d8be
	I0428 18:31:04.397539    5100 command_runner.go:130] > 8542b2c39cf5
	I0428 18:31:04.397539    5100 command_runner.go:130] > 776d075f3716
	I0428 18:31:04.397539    5100 command_runner.go:130] > d1342e9d7111
	I0428 18:31:04.397539    5100 command_runner.go:130] > d55fefd692cf
	I0428 18:31:04.397619    5100 command_runner.go:130] > e148c0cdbae0
	I0428 18:31:04.397619    5100 command_runner.go:130] > edb2c636ad5d
	I0428 18:31:04.397619    5100 command_runner.go:130] > 27388b03fb26
	I0428 18:31:04.397619    5100 command_runner.go:130] > 038a267a1caf
	I0428 18:31:04.397619    5100 command_runner.go:130] > 9ffe1b8b41e4
	I0428 18:31:04.397619    5100 command_runner.go:130] > 8328e1b41d78
	I0428 18:31:04.397619    5100 command_runner.go:130] > 26381d4606b5
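Stopping the kube-system containers above is a two-step docker call: list the IDs of containers whose names match the kubelet's k8s_<container>_<pod>_(kube-system)_ pattern, then pass them all to a single `docker stop`. A sketch of the same sequence (helper name illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers mirrors docker.go above: list container IDs
    // whose names match the kube-system naming pattern and stop them all
    // in one `docker stop` invocation.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing to stop
        }
        args := append([]string{"stop"}, ids...)
        return exec.Command("docker", args...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            fmt.Println(err)
        }
    }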
	I0428 18:31:04.410385    5100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0428 18:31:04.456046    5100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 18:31:04.472006    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0428 18:31:04.472006    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0428 18:31:04.472993    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0428 18:31:04.472993    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:31:04.472993    5100 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:31:04.472993    5100 kubeadm.go:156] found existing configuration files:
	
	I0428 18:31:04.484113    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 18:31:04.499059    5100 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:31:04.499059    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:31:04.510719    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 18:31:04.543169    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 18:31:04.557731    5100 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:31:04.558863    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:31:04.571495    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 18:31:04.601871    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 18:31:04.617538    5100 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:31:04.617538    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:31:04.633328    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 18:31:04.666719    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 18:31:04.682759    5100 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:31:04.682759    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:31:04.694102    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
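The grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not mention the expected control-plane endpoint is deleted so the kubeconfig init phase can regenerate it; in this run all four files are already absent, so every rm is a no-op. A sketch of the loop (endpoint and paths copied from the log, helper name illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs mirrors the kubeadm.go sequence above: remove
    // any kubeconfig that is missing or that does not point at the
    // expected control-plane endpoint, so `kubeadm init phase kubeconfig`
    // regenerates it.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f) // missing or stale: drop it, like `sudo rm -f` above
                fmt.Printf("removed %s\n", f)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }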
	I0428 18:31:04.724740    5100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 18:31:04.743715    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:05.046800    5100 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using the existing "sa" key
	I0428 18:31:05.047042    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 18:31:05.789220    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.089406    5100 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 18:31:06.089521    5100 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 18:31:06.089521    5100 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0428 18:31:06.089521    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 18:31:06.200973    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.335221    5100 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
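Rather than a full `kubeadm init`, the restart path above replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A sketch of that sequence, assuming the kubeadm binary path shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the restart sequence above: invoke each
    // kubeadm init phase against the regenerated kubeadm.yaml instead of
    // running a full `kubeadm init`.
    func runInitPhases() error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Println(err)
        }
    }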
	I0428 18:31:06.335297    5100 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:31:06.352189    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:06.860779    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:07.355397    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:07.859488    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:08.350929    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:08.376581    5100 command_runner.go:130] > 1873
	I0428 18:31:08.377248    5100 api_server.go:72] duration metric: took 2.0419465s to wait for apiserver process to appear ...
	I0428 18:31:08.377378    5100 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:31:08.377378    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.562154    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0428 18:31:11.562345    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0428 18:31:11.562345    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.666889    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:11.667094    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
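The polling above is api_server.go waiting on /healthz: the probe is anonymous, so the early 403 for system:anonymous and the 500s from still-failing post-start hooks both just mean "retry". A minimal sketch of such a loop, with TLS verification skipped because the probe presents no client certificate (the interval and timeout are illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it
    // returns 200 or the deadline passes. 403 (anonymous user) and 500
    // (post-start hooks still failing) responses, as in the log above,
    // simply mean "keep waiting".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, no client cert
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver is healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://172.27.239.170:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }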
	I0428 18:31:11.892596    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.900932    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:11.900932    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:12.378092    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:12.393638    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:12.393764    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:12.886799    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:12.898497    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:12.898581    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:13.392663    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:13.399821    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
	I0428 18:31:13.400894    5100 round_trippers.go:463] GET https://172.27.239.170:8443/version
	I0428 18:31:13.400978    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:13.400978    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:13.400978    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:13.412818    5100 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 18:31:13.412818    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:13 GMT
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Audit-Id: b0a79bb7-8b25-46f1-b283-4f71e13e3f94
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:13.412818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:13.412818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Content-Length: 263
	I0428 18:31:13.412818    5100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:31:13.412818    5100 api_server.go:141] control plane version: v1.30.0
	I0428 18:31:13.412818    5100 api_server.go:131] duration metric: took 5.0354284s to wait for apiserver health ...
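For reference, the health wait logged above boils down to polling the apiserver's /healthz endpoint until it stops returning 500 and answers 200 ("ok"). A minimal client-go sketch of that loop — the helper name and the kubeconfig path are illustrative assumptions, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForHealthz polls /healthz until the apiserver answers 200 ("ok")
// or the timeout expires. 500 responses like the ones logged above
// surface as errors from DoRaw and simply trigger another poll.
func waitForHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
			if err != nil {
				return false, nil // not healthy yet; keep polling
			}
			return string(body) == "ok", nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := waitForHealthz(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}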
	I0428 18:31:13.412818    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:31:13.412818    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:31:13.417869    5100 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 18:31:13.436044    5100 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 18:31:13.445362    5100 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0428 18:31:13.445362    5100 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0428 18:31:13.445362    5100 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0428 18:31:13.445505    5100 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:31:13.445505    5100 command_runner.go:130] > Access: 2024-04-29 01:29:43.865545900 +0000
	I0428 18:31:13.445555    5100 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0428 18:31:13.445631    5100 command_runner.go:130] > Change: 2024-04-28 18:29:34.726000000 +0000
	I0428 18:31:13.445631    5100 command_runner.go:130] >  Birth: -
	I0428 18:31:13.445951    5100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 18:31:13.445951    5100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 18:31:13.547488    5100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 18:31:14.632537    5100 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0428 18:31:14.632691    5100 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0428 18:31:14.632691    5100 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0428 18:31:14.632718    5100 command_runner.go:130] > daemonset.apps/kindnet configured
	I0428 18:31:14.632809    5100 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0852276s)
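The CNI step above is a file copy followed by a kubectl apply run on the node. Reproduced as a standalone sketch — the paths and the bundled kubectl location come from the log, but running it directly (rather than through minikube's ssh_runner) is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows being run through ssh_runner:
	// apply the kindnet manifest with the bundled kubectl.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		panic(err)
	}
}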
	I0428 18:31:14.632965    5100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:31:14.633166    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:14.633166    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:14.633166    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:14.633166    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:14.639871    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:14.639871    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:14.640274    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:14.640274    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:14 GMT
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Audit-Id: 248bcd12-c9b2-4c03-974b-33681c1e3b65
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:14.642794    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1806"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87778 chars]
	I0428 18:31:14.649754    5100 system_pods.go:59] 12 kube-system pods found
	I0428 18:31:14.650290    5100 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0428 18:31:14.650290    5100 system_pods.go:61] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:14.650290    5100 system_pods.go:61] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0428 18:31:14.650462    5100 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0428 18:31:14.650646    5100 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0428 18:31:14.650646    5100 system_pods.go:74] duration metric: took 17.6807ms to wait for pod list to return data ...
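The pod enumeration above is a single GET of the kube-system pod list; the "Running / Ready:ContainersNotReady" annotations derive from each pod's phase and conditions. A condensed sketch under the same kubeconfig assumption (summarize is a hypothetical helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// summarize prints a one-line status per pod, roughly matching the
// "Running / Ready:ContainersNotReady" lines in the log above.
func summarize(p *corev1.Pod) string {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status != corev1.ConditionTrue {
			return fmt.Sprintf("%s / Ready:%s", p.Status.Phase, c.Reason)
		}
	}
	return string(p.Status.Phase)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, summarize(&p))
	}
}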
	I0428 18:31:14.650646    5100 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:31:14.650646    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:14.650646    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:14.650646    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:14.650646    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:14.657389    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:14.657389    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Audit-Id: 537b24cc-1bc6-426b-ba20-af82c6e285ac
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:14.657389    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:14.657389    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:14 GMT
	I0428 18:31:14.657389    5100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1806"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:105] duration metric: took 8.7579ms to run NodePressure ...
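The NodePressure verification reads the node list once and inspects each node's capacity (the 17734596Ki ephemeral storage and 2-CPU figures above) plus its pressure conditions. A minimal sketch of reading the same fields, again assuming the minikube kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures correspond to the "node storage ephemeral
		// capacity" and "node cpu capacity" lines above.
		fmt.Printf("node %s: ephemeral=%s cpu=%s\n",
			n.Name, n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure: %s\n", c.Type)
				}
			}
		}
	}
}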
	I0428 18:31:14.659404    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:15.095181    5100 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0428 18:31:15.095181    5100 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
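The addon re-apply above runs kubeadm's addon phase through bash so the bundled binaries directory wins on PATH. As a standalone sketch — the command string is taken from the log, but executing it directly on the node (outside minikube's runner) is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// bash -c keeps the quoted PATH override intact, exactly as logged.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // "[addons] Applied essential addon: CoreDNS" etc.
	if err != nil {
		panic(err)
	}
}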
	I0428 18:31:15.096193    5100 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0428 18:31:15.096193    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0428 18:31:15.096193    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.096193    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.096193    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.136172    5100 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0428 18:31:15.136172    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Audit-Id: 65742097-3ca7-436d-bc20-f699a73df0d7
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.136172    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.136172    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.138207    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1812"},"items":[{"metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1757","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0428 18:31:15.139771    5100 kubeadm.go:733] kubelet initialised
	I0428 18:31:15.139771    5100 kubeadm.go:734] duration metric: took 43.5779ms waiting for restarted kubelet to initialise ...
	I0428 18:31:15.139771    5100 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:15.139771    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:15.139771    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.139771    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.139771    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.145356    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:15.145950    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Audit-Id: 459a1c96-348d-496d-84c8-66eff19f8b17
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.145950    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.145950    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.146022    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.147048    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1812"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0428 18:31:15.149647    5100 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.150653    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:15.150653    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.150653    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.150653    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.153647    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.153647    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.154132    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.154132    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Audit-Id: 00fb04df-3abb-4699-8d39-aaed3f0c4562
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.154369    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:15.154928    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.155000    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.155000    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.155000    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.157642    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.157847    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.157847    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.157847    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Audit-Id: fe9b308f-e86b-4f3b-bb28-83392d7f2e48
	I0428 18:31:15.158186    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.158691    5100 pod_ready.go:97] node "multinode-788600" hosting pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.158973    5100 pod_ready.go:81] duration metric: took 9.3258ms for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.158973    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
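The skip above is a node gate: before a pod's own Ready condition is even considered, the waiter checks the Ready condition of the node hosting it, and a NotReady node short-circuits the wait for that pod. A condensed sketch of that check — nodeReady is a hypothetical helper, not minikube's function:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether a node's Ready condition is True; pods on a
// node where this returns false are skipped, as in the log above.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(nodeReady(n)) // false -> the wait is skipped for hosted pods
}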
	I0428 18:31:15.158973    5100 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.159057    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:31:15.159127    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.159127    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.159127    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.171183    5100 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0428 18:31:15.171183    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.171183    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.171183    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Audit-Id: 9e8d3a67-7fc6-44da-a4ab-4c3bf297d313
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.171183    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1757","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0428 18:31:15.171183    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.172154    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.172154    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.172154    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.174165    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.174603    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Audit-Id: 58fceb9c-2f26-4fda-8c21-03ed3aef01a5
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.174603    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.174603    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.175234    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.175376    5100 pod_ready.go:97] node "multinode-788600" hosting pod "etcd-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.175376    5100 pod_ready.go:81] duration metric: took 16.403ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.175376    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "etcd-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.175376    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.175376    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:15.175376    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.175376    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.175376    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.177956    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.178891    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.178891    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.178891    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Audit-Id: cc23e9ad-96dd-439b-a430-a3c689751251
	I0428 18:31:15.179004    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.179113    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"5ade8d95-5387-4444-95af-604116cf695e","resourceVersion":"1754","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.239.170:8443","kubernetes.io/config.hash":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.mirror":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.seen":"2024-04-29T01:31:06.268742128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0428 18:31:15.179786    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.179786    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.179877    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.179877    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.182704    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.182896    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Audit-Id: c3ba53e4-8df9-4d4e-bda5-185d6c10f77f
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.182896    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.182896    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.182896    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.183632    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-apiserver-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.183632    5100 pod_ready.go:81] duration metric: took 8.2563ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.183632    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-apiserver-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.183632    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.183820    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:15.183820    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.183820    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.183820    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.186501    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.186501    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Audit-Id: 99893935-fb21-420c-9cff-c20de7ccb907
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.186939    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.186939    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.187313    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:15.188091    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.188091    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.188091    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.188091    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.190500    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.190500    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.190500    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.190500    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Audit-Id: 7f56dd45-7d68-462a-a53e-5a85e89ccc57
	I0428 18:31:15.190500    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.191494    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-controller-manager-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.191494    5100 pod_ready.go:81] duration metric: took 7.7784ms for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.191494    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-controller-manager-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.191494    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.306676    5100 request.go:629] Waited for 114.7847ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:15.306676    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:15.306676    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.306676    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.306676    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.310457    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.311284    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Audit-Id: 103130c2-ca49-4b4a-92e6-5d0ccc0d6407
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.311284    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.311284    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.311284    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"1811","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
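The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter, which queues requests before they ever reach the apiserver. A runnable sketch of that behavior — the 5 QPS / 10 burst figures are client-go's documented defaults, assumed here rather than stated in the log:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go throttles requests through a token bucket; once the
	// burst of 10 is spent, each request waits ~200ms at 5 QPS,
	// matching the 100-200ms waits logged above.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 12; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		fmt.Printf("request %2d waited %v\n", i, time.Since(start))
	}
}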
	I0428 18:31:15.508336    5100 request.go:629] Waited for 195.9795ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.508605    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.508605    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.508651    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.508667    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.512169    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.512169    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Audit-Id: 6961d0a4-358e-4e41-aa67-2f2730d6f3ff
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.512464    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.512464    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.512464    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.512718    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.513623    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-proxy-bkkql" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.513623    5100 pod_ready.go:81] duration metric: took 322.1279ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.513623    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-proxy-bkkql" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.513623    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.696409    5100 request.go:629] Waited for 182.6614ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:15.696609    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:15.696609    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.696609    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.696609    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.700367    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.700367    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.701342    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.701342    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Audit-Id: 20e27f84-22b7-47b4-a097-76936ffa5a07
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.701658    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:31:15.900703    5100 request.go:629] Waited for 198.0923ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:15.900822    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:15.900822    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.900822    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.900822    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.909119    5100 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 18:31:15.909119    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.909119    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.909119    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Audit-Id: d0c1002e-a1b6-497f-892e-ddd3c4c172ec
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.909119    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"1353","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0428 18:31:15.910040    5100 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:15.910040    5100 pod_ready.go:81] duration metric: took 396.4162ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.910040    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:16.102000    5100 request.go:629] Waited for 191.7654ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:16.102255    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:16.102255    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.102255    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.102255    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.105969    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.107006    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Audit-Id: 855ecca8-d4e6-430b-aa3c-4558037042ca
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.107038    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.107038    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.107379    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjsfc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f06aadb7-e646-4105-af2f-0acc4a8ad174","resourceVersion":"1698","creationTimestamp":"2024-04-29T01:16:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:16:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0428 18:31:16.306385    5100 request.go:629] Waited for 198.1483ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:16.306425    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:16.306425    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.306425    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.306425    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.310172    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.311023    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.311023    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.311023    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.311096    5100 round_trippers.go:580]     Audit-Id: 0f268a7f-8c37-4653-86df-96846cc991d3
	I0428 18:31:16.311337    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m03","uid":"d68977ad-af85-4957-85dc-4ad584113d26","resourceVersion":"1709","creationTimestamp":"2024-04-29T01:26:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_26_47_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:26:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0428 18:31:16.311937    5100 pod_ready.go:97] node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:16.311937    5100 pod_ready.go:81] duration metric: took 401.8965ms for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:16.311937    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
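The "(skipping!)" entries above come from a hosting-node check: before the wait for a pod is counted further, the loop looks at whether the node running it reports a Ready condition of True (here m03 reports "Unknown"). Below is a minimal client-go sketch of that kind of check; it is not minikube's actual pod_ready.go, and the helper name, kubeconfig path, and hard-coded node name are placeholders taken from this log.

```go
// nodeready_sketch.go: a hypothetical helper mirroring the "(skipping!)"
// check recorded above. Not minikube's code; names are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node's Ready condition is True,
// and returns the raw status ("Unknown" for m03 in the log above).
func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, string, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, "", err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, string(c.Status), nil
		}
	}
	return false, "condition missing", nil
}

func main() {
	// Placeholder path; the run above uses a Windows kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, status, err := nodeIsReady(context.Background(), client, "multinode-788600-m03")
	fmt.Println(ready, status, err)
}
```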
	I0428 18:31:16.311937    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:16.509495    5100 request.go:629] Waited for 197.318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:16.509644    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:16.509724    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.509724    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.509762    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.512765    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.513186    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.513186    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.513186    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Audit-Id: 43c41b94-99b3-45b3-823c-f7e75c2eefbe
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.513458    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"1769","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0428 18:31:16.700515    5100 request.go:629] Waited for 186.1649ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:16.700515    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:16.700515    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.700515    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.700515    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.704023    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.705037    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Audit-Id: ee4d4b15-72df-4e5c-86f4-5490ccc9a289
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.705077    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.705077    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.705222    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:16.705767    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-scheduler-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:16.705924    5100 pod_ready.go:81] duration metric: took 393.9853ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:16.705924    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-scheduler-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:16.705924    5100 pod_ready.go:38] duration metric: took 1.566149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
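Each wait summarized in the duration metric above follows the pattern these entries record: GET the pod, inspect its PodReady condition, back off, and repeat until Ready or timeout. A minimal sketch of that pattern using client-go's polling helper follows; it is not minikube's actual implementation, and the pod name, namespace, interval, and kubeconfig path are placeholders lifted from this log.

```go
// podready_sketch.go: a minimal readiness poll of the kind logged above
// by pod_ready.go. Assumptions: placeholder kubeconfig path and pod name.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirror the log's "waiting up to 4m0s for pod ... to be Ready".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-kc8c4", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}
```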
	I0428 18:31:16.705924    5100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 18:31:16.724721    5100 command_runner.go:130] > -16
	I0428 18:31:16.725018    5100 ops.go:34] apiserver oom_adj: -16
	I0428 18:31:16.725018    5100 kubeadm.go:591] duration metric: took 12.4909983s to restartPrimaryControlPlane
	I0428 18:31:16.725018    5100 kubeadm.go:393] duration metric: took 12.5567953s to StartCluster
	I0428 18:31:16.725018    5100 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:16.725018    5100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:16.726568    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:16.727966    5100 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 18:31:16.727966    5100 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 18:31:16.732826    5100 out.go:177] * Verifying Kubernetes components...
	I0428 18:31:16.728603    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:31:16.737476    5100 out.go:177] * Enabled addons: 
	I0428 18:31:16.742152    5100 addons.go:505] duration metric: took 14.1858ms for enable addons: enabled=[]
	I0428 18:31:16.751296    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:17.008730    5100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:31:17.039776    5100 node_ready.go:35] waiting up to 6m0s for node "multinode-788600" to be "Ready" ...
	I0428 18:31:17.040103    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.040103    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.040146    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.040172    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.043764    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.043764    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Audit-Id: a8273f55-9742-4a3a-93b9-eca47c09292d
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.043764    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.043764    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.044784    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:17.044784    5100 node_ready.go:49] node "multinode-788600" has status "Ready":"True"
	I0428 18:31:17.044784    5100 node_ready.go:38] duration metric: took 4.9181ms for node "multinode-788600" to be "Ready" ...
	I0428 18:31:17.044784    5100 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:17.109075    5100 request.go:629] Waited for 64.0491ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:17.109310    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:17.109310    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.109310    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.109310    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.114919    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:17.115371    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.115427    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Audit-Id: 006c7d51-eccd-4506-a698-005b0daa1d0b
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.115427    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.116826    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1817"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0428 18:31:17.120579    5100 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:17.297742    5100 request.go:629] Waited for 177.1623ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.297742    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.297742    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.297742    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.297742    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.301521    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.301521    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.301521    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.301521    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Audit-Id: 8e66d5c6-ec9a-4aa3-9b06-d540afe60889
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.302710    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:17.499470    5100 request.go:629] Waited for 195.8663ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.499470    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.499470    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.499470    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.499470    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.503650    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.503650    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.503650    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.503650    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.503755    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Audit-Id: 81db7e77-99aa-4860-9e04-b6ee3d7ee5e6
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.504045    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:17.703045    5100 request.go:629] Waited for 78.0265ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.703158    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.703158    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.703158    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.703158    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.708829    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:17.709368    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Audit-Id: 5590ba60-674b-44c2-82f1-0b5501385170
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.709443    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.709443    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.709717    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:17.907071    5100 request.go:629] Waited for 196.8197ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.907260    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.907260    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.907260    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.907260    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.912062    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:17.912062    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Audit-Id: 7eb648bb-2c0e-4586-8efc-8ed163da53ce
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.912062    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.912062    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.912062    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:18.125074    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:18.125176    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.125176    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.125176    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.130106    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:18.130391    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Audit-Id: 16906fd6-6d66-4bc7-9365-56443fcce4da
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.130455    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.130455    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.130455    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:18.297115    5100 request.go:629] Waited for 165.6205ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.297115    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.297115    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.297115    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.297115    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.301050    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:18.301050    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.301050    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.301050    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.301771    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.301771    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.301771    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.301771    5100 round_trippers.go:580]     Audit-Id: 493da01f-28a4-469a-b479-0e5c634dcda6
	I0428 18:31:18.302106    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:18.623750    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:18.623750    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.623884    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.623884    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.627295    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:18.627295    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.627295    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Audit-Id: 0158c0f7-3b76-4cc8-88e6-20a75e3a14a6
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.628185    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.628287    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:18.701220    5100 request.go:629] Waited for 71.8291ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.701447    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.701447    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.701447    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.701447    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.705727    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:18.706655    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.706655    5100 round_trippers.go:580]     Audit-Id: 857798b2-ed0f-4456-ac6b-802e8e992d5a
	I0428 18:31:18.706655    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.706716    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.706716    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.706716    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.706716    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.707322    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:19.125144    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:19.125458    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.125458    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.125458    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.129851    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:19.129851    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.130436    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Audit-Id: e680eb2e-fdde-4f45-8785-96cc96451ae4
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.130436    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.130645    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:19.131464    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:19.131464    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.131464    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.131539    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.135413    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:19.135592    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.135592    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Audit-Id: 94b7ce89-e9f4-4224-84b3-b2a746aed8d9
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.135592    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.136057    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:19.136636    5100 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
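The recurring "Waited for ... due to client-side throttling, not priority and fairness" entries are emitted by client-go's request.go when its token-bucket rate limiter delays a request; with rest.Config left at the client-go defaults (QPS 5, Burst 10), the back-to-back pod and node GETs in this loop routinely queue for ~200ms. A minimal sketch of where those knobs live follows; the kubeconfig path is a placeholder and the raised values are illustrative only, not a recommendation.

```go
// throttling_sketch.go: where the client-side throttling reported above
// is configured. Assumption: placeholder kubeconfig path.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5 and Burst=10 when these fields are left
	// zero; exceeding that budget queues requests and produces the
	// request.go:629 throttling messages seen in this log.
	cfg.QPS = 20
	cfg.Burst = 40

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```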
	I0428 18:31:19.625365    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:19.625365    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.625365    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.625365    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.629585    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:19.630350    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Audit-Id: 9a54d882-18e4-412a-95e9-2944c7341b61
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.630350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.630350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.631010    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:19.631732    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:19.631732    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.631732    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.631732    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.634764    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:19.635282    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.635282    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.635282    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Audit-Id: ce01b556-8310-4cd0-97b1-00048e3ce5ef
	I0428 18:31:19.635367    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.635644    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:20.125337    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:20.125563    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.125563    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.125563    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.130243    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:20.130243    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Audit-Id: d66e8822-e755-4521-8c73-cf13c831f445
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.130340    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.130340    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.130550    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:20.131365    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:20.131365    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.131365    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.131365    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.135405    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:20.135608    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Audit-Id: 45cd6471-74c5-4493-b702-d89fd8d35e5d
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.135608    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.135608    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.136101    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:20.634410    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:20.634488    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.634488    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.634557    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.637052    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:20.637426    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Audit-Id: ab177460-eb95-46f5-a35e-f25819254aeb
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.637541    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.637541    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.637794    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:20.638636    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:20.638636    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.638636    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.638695    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.641492    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:20.641556    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Audit-Id: ed017748-8a58-4062-9bb8-e81c00b3cba6
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.641624    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.641624    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.641624    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.641935    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.127928    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:21.127928    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.127928    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.127928    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.132962    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:21.133357    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.133357    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.133357    5100 round_trippers.go:580]     Audit-Id: 1230feb4-c38f-4839-9a95-4f3d25a63a95
	I0428 18:31:21.133430    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.133430    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.133430    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.133430    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.133643    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:21.134444    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.134444    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.134444    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.134444    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.140109    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:21.140391    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.140391    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.140492    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Audit-Id: 0db692a7-5837-417e-8d92-b8c244e93eee
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.140806    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.141367    5100 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:21.633646    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:21.633743    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.633743    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.633743    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.637104    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.638230    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Audit-Id: f68ff9c4-1dfd-405f-a796-cc57177a2633
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.638230    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.638230    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.638622    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0428 18:31:21.639344    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.639415    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.639415    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.639415    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.642703    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.642882    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Audit-Id: 156045d7-ea62-439c-a5a2-764198fcf8fc
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.642882    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.642882    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.643283    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.643782    5100 pod_ready.go:92] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.643853    5100 pod_ready.go:81] duration metric: took 4.5231918s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.643853    5100 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.644054    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:31:21.644110    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.644110    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.644110    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.646053    5100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:31:21.646894    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.646894    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.646894    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Audit-Id: f35af6a5-cb54-4f3a-a859-d4268c14877e
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.647187    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1828","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0428 18:31:21.647739    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.647739    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.647739    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.647739    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.650311    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:21.650311    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Audit-Id: 2caa71c7-c1b8-47dc-9700-df9b0410bb56
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.650502    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.650502    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.650502    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.650685    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.650685    5100 pod_ready.go:92] pod "etcd-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.650685    5100 pod_ready.go:81] duration metric: took 6.8321ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.650685    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.710990    5100 request.go:629] Waited for 60.172ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:21.711066    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:21.711066    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.711066    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.711066    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.714561    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.714561    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Audit-Id: fc8e88b9-66f9-4898-9ff1-4315cda3ab66
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.715037    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.715037    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.715299    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"5ade8d95-5387-4444-95af-604116cf695e","resourceVersion":"1819","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.239.170:8443","kubernetes.io/config.hash":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.mirror":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.seen":"2024-04-29T01:31:06.268742128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0428 18:31:21.897294    5100 request.go:629] Waited for 181.1138ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.897451    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.897451    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.897451    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.897451    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.902008    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:21.902330    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.902330    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.902405    5100 round_trippers.go:580]     Audit-Id: 455e52d6-9783-4cd0-ba22-d7ced6bdbde5
	I0428 18:31:21.902474    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.902513    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.902513    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.902563    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.902731    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.903336    5100 pod_ready.go:92] pod "kube-apiserver-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.903336    5100 pod_ready.go:81] duration metric: took 252.6502ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.903390    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:22.101010    5100 request.go:629] Waited for 197.3159ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.101123    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.101329    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.101329    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.101329    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.105803    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.105803    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.105803    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.105803    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Audit-Id: 2718f490-3370-4fab-81d1-075ce51d9a4b
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.106752    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.107214    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:22.302267    5100 request.go:629] Waited for 194.1916ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.302870    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.302870    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.302870    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.302870    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.306443    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:22.307139    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Audit-Id: 1859c6bd-dd6f-46f3-8023-86dfbf522bb5
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.307139    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.307139    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.307433    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:22.503587    5100 request.go:629] Waited for 93.5627ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.503911    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.503911    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.503911    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.503911    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.508599    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.508599    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.508599    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.508599    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Audit-Id: 69d38582-07ce-450b-9982-677772a19f0f
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.508599    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:22.705855    5100 request.go:629] Waited for 196.1165ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.706020    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.706020    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.706020    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.706020    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.710776    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.710776    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Audit-Id: b319da81-14dd-4a76-b77b-5cad9a9f0cdd
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.710776    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.710776    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.711099    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:22.909509    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.909509    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.909509    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.909509    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.913239    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:22.913239    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Audit-Id: 90797ee4-eb66-443a-bee2-91e3160ae5a3
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.914152    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.914197    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.914197    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.914394    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.096951    5100 request.go:629] Waited for 181.6718ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.097189    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.097189    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.097189    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.097189    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.103361    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:23.103791    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.103791    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.103791    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Audit-Id: 195353fb-71f8-4541-826a-8108aaac1962
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.104000    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.410524    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:23.410524    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.410524    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.410524    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.418485    5100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:31:23.418637    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.418637    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.418637    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.418723    5100 round_trippers.go:580]     Audit-Id: a9965ad6-304f-4265-b0f7-4574d439bc5e
	I0428 18:31:23.418987    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.504617    5100 request.go:629] Waited for 84.6283ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.504868    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.504868    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.504908    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.504908    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.512339    5100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:31:23.512339    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Audit-Id: 44c42d96-1347-4a1d-bb98-6efab260b0a9
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.512339    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.512339    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.512948    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.912694    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:23.912694    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.912694    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.912694    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.916280    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:23.917051    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.917051    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.917051    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.917051    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.917169    5100 round_trippers.go:580]     Audit-Id: e56e8589-fd0b-4a10-8978-88a5498adf87
	I0428 18:31:23.917169    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.917255    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.917386    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.918466    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.918466    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.918466    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.918545    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.920990    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:23.920990    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.921364    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.921364    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Audit-Id: 459d79fa-7fd5-458c-b59b-4aa09ca2d11f
	I0428 18:31:23.921619    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.921844    5100 pod_ready.go:102] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:24.403813    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:24.403813    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.403898    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.403898    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.407347    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:24.407347    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.407880    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.407880    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Audit-Id: 6c51501b-33a9-4f17-83a5-0d289e64f234
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.408280    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:24.409107    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:24.409107    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.409107    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.409107    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.418873    5100 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0428 18:31:24.418999    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.418999    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.418999    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Audit-Id: c65fc721-9bdd-425f-884a-ac4fc9762dac
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.418999    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:24.907990    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:24.907990    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.907990    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.907990    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.911050    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:24.911818    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.911818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.911818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Audit-Id: b319d2c2-62a5-4196-b683-3941c10aa59c
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.912137    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:24.912842    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:24.912842    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.912842    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.912842    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.915423    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:24.915423    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Audit-Id: 84f5841e-e7ee-45e3-a703-0f959c7f358a
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.915997    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.915997    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.916211    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:25.406479    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:25.406479    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.406479    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.406479    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.410068    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:25.411003    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.411003    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Audit-Id: 65af6da8-cf58-4415-9bd1-78eb11064ed9
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.411085    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.411437    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:25.412086    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:25.412086    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.412086    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.412086    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.416108    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:25.416108    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.416108    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.416108    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.416108    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.416108    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.416349    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.416349    5100 round_trippers.go:580]     Audit-Id: 85a041c9-f007-4e8d-a7e5-2d480a07a6f2
	I0428 18:31:25.416451    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:25.905969    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:25.906041    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.906041    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.906041    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.910420    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:25.910753    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.910753    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Audit-Id: f26b3776-3168-481a-a906-dc87ef8303f5
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.910753    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.911278    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:25.912093    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:25.912171    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.912243    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.912280    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.916509    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:25.916564    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.916564    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.916606    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.916606    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.916606    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.916606    5100 round_trippers.go:580]     Audit-Id: 4e2b371e-42dc-4d12-9f9d-0c0566f49f31
	I0428 18:31:25.916652    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.917158    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.406983    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:26.407082    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.407082    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.407082    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.411527    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:26.411527    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.412073    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.412073    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Audit-Id: b9d642b0-29ca-47a0-af35-12fa93ac8141
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.412518    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:26.413377    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.413377    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.413469    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.413469    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.416937    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.416937    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Audit-Id: 5bde9dee-4272-4b16-9ef7-cef4f1306ca7
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.417604    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.417604    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.417907    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.418633    5100 pod_ready.go:102] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:26.910803    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:26.910803    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.910803    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.910803    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.914461    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.915082    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.915082    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.915082    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Audit-Id: cf8512b6-0c9a-49e4-b462-11a9c7c0186e
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.915465    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1845","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0428 18:31:26.916199    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.916253    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.916253    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.916253    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.919831    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.919831    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.919831    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.919831    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Audit-Id: eb964c52-b7a1-4dce-84d1-d5ced6289e32
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.919831    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.920847    5100 pod_ready.go:92] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:26.920847    5100 pod_ready.go:81] duration metric: took 5.0174446s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
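The GET-pod / GET-node pairs above are the shape of minikube's pod_ready poll: fetch the pod, read its Ready condition, fetch the hosting node, and retry on a fixed interval until the 6m0s deadline. A minimal sketch of the same pattern against the public client-go API (the helper name and the 500ms interval are illustrative, not minikube's actual pod_ready.go code):

```go
package readycheck

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // a production loop might tolerate transient API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet; keep polling
		})
}
```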
	I0428 18:31:26.920847    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.920847    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:26.920847    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.920847    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.920847    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.923862    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.923991    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Audit-Id: 34326d60-61eb-4e29-9e55-3265edff4448
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.923991    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.923991    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.924328    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"1811","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0428 18:31:26.925059    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.925157    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.925157    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.925157    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.929745    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:26.930529    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Audit-Id: 21fcb88b-b68a-4e51-b75f-79f6bbbc4901
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.930529    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.930529    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.930529    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.930529    5100 pod_ready.go:92] pod "kube-proxy-bkkql" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:26.930529    5100 pod_ready.go:81] duration metric: took 9.6822ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.930529    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.930529    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:26.930529    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.930529    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.930529    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.933549    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.933549    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Audit-Id: bafcc134-e6f0-426a-a801-c20dfa8ae175
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.933549    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.933549    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.933549    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:31:27.098538    5100 request.go:629] Waited for 163.8061ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:27.098710    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:27.098710    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.098710    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.098710    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.102441    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.102441    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.103445    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.103445    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Audit-Id: fb5898ce-a6b8-4a4a-b6d5-31ad26eecf80
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.103520    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.105457    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"1353","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0428 18:31:27.105457    5100 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:27.105457    5100 pod_ready.go:81] duration metric: took 174.9279ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.105457    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
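The "Waited for ... due to client-side throttling, not priority and fairness" lines around this point come from client-go's local token-bucket rate limiter, which defaults to 5 QPS with a burst of 10; once the burst is spent, each request queues briefly, which matches the ~160-200ms waits logged. The limits live on the rest.Config used to build the clientset; a sketch of raising them (values illustrative):

```go
package fastclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a higher client-side rate limit.
func newClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/sec when left at zero
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
```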
	I0428 18:31:27.301745    5100 request.go:629] Waited for 195.5395ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:27.302056    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:27.302056    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.302056    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.302056    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.307781    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:27.307860    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.307860    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.307860    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.307926    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.307926    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.307952    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.307952    5100 round_trippers.go:580]     Audit-Id: 7efa1919-f143-4c8f-b032-2b86afdfc5a3
	I0428 18:31:27.307981    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjsfc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f06aadb7-e646-4105-af2f-0acc4a8ad174","resourceVersion":"1698","creationTimestamp":"2024-04-29T01:16:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:16:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0428 18:31:27.502902    5100 request.go:629] Waited for 193.7858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:27.503060    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:27.503060    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.503060    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.503060    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.506683    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.507255    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Audit-Id: 9d27db1a-1bf1-43d7-9ff4-dca89bead646
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.507255    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.507255    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.507493    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m03","uid":"d68977ad-af85-4957-85dc-4ad584113d26","resourceVersion":"1842","creationTimestamp":"2024-04-29T01:26:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_26_47_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:26:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0428 18:31:27.508040    5100 pod_ready.go:97] node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:27.508183    5100 pod_ready.go:81] duration metric: took 402.6814ms for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:27.508199    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
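The skip above keys off the hosting node's Ready condition rather than the pod's: m03 reports Ready "Unknown" (its kubelet is unreachable at this point in the restart test), so waiting on pods scheduled there is pointless. The node-side check reduces to a sketch like this (illustrative helper, not minikube's code):

```go
package readycheck

import corev1 "k8s.io/api/core/v1"

// nodeReady reports whether a node's Ready condition is True; it can also be
// False or Unknown (kubelet unreachable, as with multinode-788600-m03 above).
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```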
	I0428 18:31:27.508199    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.706822    5100 request.go:629] Waited for 198.3375ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:27.707038    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:27.707038    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.707038    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.707038    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.710618    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.710618    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.710618    5100 round_trippers.go:580]     Audit-Id: 346dffd5-6ed0-444b-982a-bdfbd2984a5d
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.710785    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.710785    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.710965    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"1834","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0428 18:31:27.909797    5100 request.go:629] Waited for 197.525ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:27.910028    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:27.910109    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.910109    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.910109    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.914589    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:27.914589    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.914589    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.914589    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.914589    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Audit-Id: 9b5cf9aa-ca13-4191-8718-7bcc2058694f
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.914843    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:27.915494    5100 pod_ready.go:92] pod "kube-scheduler-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:27.915494    5100 pod_ready.go:81] duration metric: took 407.2947ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.915494    5100 pod_ready.go:38] duration metric: took 10.8706849s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:27.915494    5100 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:31:27.928493    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:27.958046    5100 command_runner.go:130] > 1873
	I0428 18:31:27.958165    5100 api_server.go:72] duration metric: took 11.2301726s to wait for apiserver process to appear ...
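The process check above leans on pgrep's flags: -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest matching process; it prints the PID (1873 here) and exits non-zero when nothing matches. A local sketch of the same check via os/exec (minikube runs it over SSH inside the guest):

```go
package procs

import (
	"os/exec"
	"strconv"
	"strings"
)

// apiserverPID returns the newest PID whose full command line matches the
// pattern; pgrep exits non-zero (so Output returns an error) on no match.
func apiserverPID() (int, error) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}
```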
	I0428 18:31:27.958165    5100 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:31:27.958239    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:27.966618    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
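The healthz probe is a bare GET against the API server; a 200 with body "ok" counts as healthy. The same check expressed through client-go's REST client (a sketch; clientset construction as in the earlier snippets):

```go
package health

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy issues GET /healthz and treats a 200 "ok" body as healthy.
func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return false, err // non-2xx responses surface as errors here
	}
	return string(body) == "ok", nil
}
```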
	I0428 18:31:27.967716    5100 round_trippers.go:463] GET https://172.27.239.170:8443/version
	I0428 18:31:27.967756    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.967798    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.967798    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.970713    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:27.970929    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.970929    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Content-Length: 263
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Audit-Id: d08eef7e-51d9-480d-801f-83d53e5365c3
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.971026    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.971026    5100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:31:27.971163    5100 api_server.go:141] control plane version: v1.30.0
	I0428 18:31:27.971195    5100 api_server.go:131] duration metric: took 12.9561ms to wait for apiserver health ...
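The /version body above is apimachinery's version.Info; the discovery client wraps the round trip and the JSON decode in a single call, which is where the "control plane version: v1.30.0" line comes from. Sketch:

```go
package health

import "k8s.io/client-go/kubernetes"

// controlPlaneVersion returns the API server's reported gitVersion.
func controlPlaneVersion(cs kubernetes.Interface) (string, error) {
	info, err := cs.Discovery().ServerVersion() // GET /version, decoded into version.Info
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil // "v1.30.0" in this run
}
```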
	I0428 18:31:27.971195    5100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:31:28.110224    5100 request.go:629] Waited for 138.7183ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.110224    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.110224    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.110224    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.110224    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.117002    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:28.117293    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Audit-Id: 35f8cbc1-51d6-4b4a-b6c5-4c6af5816f17
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.117293    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.117293    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.118618    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86158 chars]
	I0428 18:31:28.122837    5100 system_pods.go:59] 12 kube-system pods found
	I0428 18:31:28.122837    5100 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:31:28.123014    5100 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:31:28.123089    5100 system_pods.go:74] duration metric: took 151.8941ms to wait for pod list to return data ...
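The 12-pod inventory above is a single LIST of kube-system followed by a per-pod phase check; note that "Running" is the pod phase, a weaker property than the Ready condition polled earlier. A sketch of the same pass (illustrative helper):

```go
package health

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notRunning lists kube-system pods and returns the names of any whose
// phase is not Running. Phase is coarser than the Ready condition.
func notRunning(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			bad = append(bad, p.Name)
		}
	}
	return bad, nil
}
```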
	I0428 18:31:28.123142    5100 default_sa.go:34] waiting for default service account to be created ...
	I0428 18:31:28.311814    5100 request.go:629] Waited for 188.3166ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:31:28.311814    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:31:28.311814    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.311814    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.311814    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.316444    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:28.317105    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.317105    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.317105    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Content-Length: 262
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Audit-Id: cd65f6c5-26c4-4ad7-aba0-8dea016a8f55
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.317204    5100 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cd75ac33-a0a3-4b71-9266-aa10ab97a649","resourceVersion":"328","creationTimestamp":"2024-04-29T01:09:02Z"}}]}
	I0428 18:31:28.317550    5100 default_sa.go:45] found service account: "default"
	I0428 18:31:28.317550    5100 default_sa.go:55] duration metric: took 194.4066ms for default service account to be created ...
	I0428 18:31:28.317659    5100 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 18:31:28.498845    5100 request.go:629] Waited for 181.1371ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.499029    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.499029    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.499029    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.499029    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.505707    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:28.505707    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.505707    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Audit-Id: aa46fee1-69c6-4bcc-a38e-ab3ddbb26b03
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.506263    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.507406    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86158 chars]
	I0428 18:31:28.512076    5100 system_pods.go:86] 12 kube-system pods found
	I0428 18:31:28.512215    5100 system_pods.go:89] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:31:28.512215    5100 system_pods.go:126] duration metric: took 194.5554ms to wait for k8s-apps to be running ...
	I0428 18:31:28.512215    5100 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 18:31:28.523596    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:31:28.548090    5100 system_svc.go:56] duration metric: took 35.8758ms WaitForService to wait for kubelet
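systemctl's is-active --quiet reports state purely through its exit code (0 = active), so the runner only has to check whether the command errored. minikube executes it over SSH inside the guest; a local sketch of the same check:

```go
package svc

import "os/exec"

// unitActive reports whether a systemd unit is active; "is-active --quiet"
// prints nothing and signals the result via exit code 0.
func unitActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}
```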
	I0428 18:31:28.548090    5100 kubeadm.go:576] duration metric: took 11.8200968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:31:28.548090    5100 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:31:28.702139    5100 request.go:629] Waited for 153.8724ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:28.702342    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:28.702342    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.702342    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.702342    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.707188    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:28.707350    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Audit-Id: acdc7926-627b-4787-8c23-2d4f5214c459
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.707350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.707350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.707958    5100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15503 chars]
	I0428 18:31:28.709032    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709032    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709032    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709119    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709119    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709119    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709119    5100 node_conditions.go:105] duration metric: took 161.0283ms to run NodePressure ...
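
Note: the NodePressure check above lists the cluster's nodes once and reads the ephemeral-storage and cpu capacity out of each node's status. A minimal client-go sketch of that kind of check (kubeconfig discovery and output format are illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the test run talks to https://172.27.239.170:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
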
	I0428 18:31:28.709119    5100 start.go:240] waiting for startup goroutines ...
	I0428 18:31:28.709180    5100 start.go:245] waiting for cluster config update ...
	I0428 18:31:28.709206    5100 start.go:254] writing updated cluster config ...
	I0428 18:31:28.713635    5100 out.go:177] 
	I0428 18:31:28.728535    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:31:28.729592    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:31:28.736674    5100 out.go:177] * Starting "multinode-788600-m02" worker node in "multinode-788600" cluster
	I0428 18:31:28.739063    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:31:28.739063    5100 cache.go:56] Caching tarball of preloaded images
	I0428 18:31:28.739414    5100 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:31:28.739414    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:31:28.739414    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:31:28.741647    5100 start.go:360] acquireMachinesLock for multinode-788600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:31:28.742058    5100 start.go:364] duration metric: took 410.2µs to acquireMachinesLock for "multinode-788600-m02"
	I0428 18:31:28.742202    5100 start.go:96] Skipping create...Using existing machine configuration
	I0428 18:31:28.742240    5100 fix.go:54] fixHost starting: m02
	I0428 18:31:28.742706    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:30.731719    5100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:31:30.731719    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:30.731719    5100 fix.go:112] recreateIfNeeded on multinode-788600-m02: state=Stopped err=<nil>
	W0428 18:31:30.731719    5100 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 18:31:30.737932    5100 out.go:177] * Restarting existing hyperv VM for "multinode-788600-m02" ...
	I0428 18:31:30.740224    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600-m02
	I0428 18:31:33.744619    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:33.744865    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:33.744865    5100 main.go:141] libmachine: Waiting for host to start...
	I0428 18:31:33.744865    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:38.345518    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:38.345783    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:39.349110    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:41.478789    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:41.478985    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:41.478985    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:43.966341    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:43.967262    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:44.974390    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:47.102289    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:47.102289    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:47.102510    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:49.538127    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:49.538127    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:50.538957    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:55.084780    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:55.084780    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:56.086813    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:58.209363    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:58.210203    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:58.210203    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:00.710459    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:00.710539    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:00.713463    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:02.772748    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:02.772748    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:02.773382    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:05.249675    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:05.249675    5100 main.go:141] libmachine: [stderr =====>] : 
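
Note: the "Waiting for host to start..." loop above shells out to powershell.exe, first for the VM state and then for the first adapter's first IP address, retrying until Hyper-V reports one (the adapter query returns an empty string until the guest has an address). A rough Go sketch of that polling pattern, not the libmachine hyperv driver itself; the VM name comes from this log and the 1-second delay is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs a single PowerShell expression the way the log above does.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-788600-m02"
	for {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			panic(err)
		}
		if state == "Running" {
			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" {
				fmt.Println("VM is up at", ip)
				return
			}
		}
		time.Sleep(time.Second) // assumed backoff between polls
	}
}
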
	I0428 18:32:05.250138    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:32:05.252945    5100 machine.go:94] provisionDockerMachine start ...
	I0428 18:32:05.253070    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:07.311282    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:07.311648    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:07.311648    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:09.851540    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:09.851968    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:09.857517    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:09.858234    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:09.858234    5100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:32:09.987588    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
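Note: each "About to run SSH command" step above uses a native Go SSH client keyed to the machine's id_rsa. A compact sketch of that pattern with golang.org/x/crypto/ssh; the address, user, and key path are placeholders from this run, and the host-key policy shown is only acceptable for a disposable test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials addr with a private key and runs one command, like the
// provisioner's "hostname" probe above.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("172.27.237.37:22", "docker", `C:\path\to\id_rsa`, "hostname")
	fmt.Println(out, err)
}
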
	I0428 18:32:09.987588    5100 buildroot.go:166] provisioning hostname "multinode-788600-m02"
	I0428 18:32:09.987674    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:12.009811    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:12.009993    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:12.010120    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:14.460526    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:14.460526    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:14.466292    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:14.466996    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:14.466996    5100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600-m02 && echo "multinode-788600-m02" | sudo tee /etc/hostname
	I0428 18:32:14.614945    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600-m02
	
	I0428 18:32:14.614945    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:16.646763    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:16.647833    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:16.647952    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:19.130150    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:19.130150    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:19.135386    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:19.135386    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:19.135912    5100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:32:19.269802    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
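
Note: the script above is an idempotent /etc/hosts edit: it rewrites (or appends) the 127.0.1.1 entry only when the new hostname is not already present, which is why the command produces no output on re-runs. A sketch of rendering the same guard for an arbitrary hostname; how minikube templates it internally may differ:

package main

import "fmt"

// hostsCmd renders the guarded 127.0.1.1 update seen in the log.
func hostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, name, name, name)
}

func main() { fmt.Println(hostsCmd("multinode-788600-m02")) }
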
	I0428 18:32:19.269875    5100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:32:19.269931    5100 buildroot.go:174] setting up certificates
	I0428 18:32:19.269976    5100 provision.go:84] configureAuth start
	I0428 18:32:19.269976    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:21.299985    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:21.299985    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:21.300532    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:23.785896    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:23.785896    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:23.786564    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:25.835274    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:25.835274    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:25.835486    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:28.326513    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:28.327140    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:28.327140    5100 provision.go:143] copyHostCerts
	I0428 18:32:28.327140    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:32:28.327140    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:32:28.327140    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:32:28.328102    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:32:28.329575    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:32:28.330124    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:32:28.330215    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:32:28.330287    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:32:28.331583    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:32:28.331858    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:32:28.331858    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:32:28.332639    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:32:28.333443    5100 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600-m02 san=[127.0.0.1 172.27.237.37 localhost minikube multinode-788600-m02]
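
Note: the "generating server cert" step above issues a Docker TLS server certificate signed by the local CA, with the printed SAN list (loopback, the VM IP, and the host names). A self-contained stdlib sketch of that kind of issuance; key sizes, lifetimes, and error handling are illustrative, not minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log's san=[...] list.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-788600-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.237.37")},
		DNSNames:     []string{"localhost", "minikube", "multinode-788600-m02"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // would become server.pem
}
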
	I0428 18:32:28.497786    5100 provision.go:177] copyRemoteCerts
	I0428 18:32:28.511364    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:32:28.511364    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:30.560256    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:30.560712    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:30.560991    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:33.031720    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:33.032061    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:33.032170    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:32:33.145316    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.633862s)
	I0428 18:32:33.145411    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:32:33.145872    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:32:33.198469    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:32:33.199250    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0428 18:32:33.249609    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:32:33.250115    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 18:32:33.312741    5100 provision.go:87] duration metric: took 14.0427318s to configureAuth
	I0428 18:32:33.312897    5100 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:32:33.313841    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:32:33.314007    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:37.773454    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:37.773454    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:37.780545    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:37.780621    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:37.780621    5100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:32:37.911382    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:32:37.911479    5100 buildroot.go:70] root file system type: tmpfs
	I0428 18:32:37.911733    5100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:32:37.911733    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:40.022110    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:40.022110    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:40.022221    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:42.596109    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:42.596981    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:42.603492    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:42.603492    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:42.604065    5100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.239.170"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:32:42.759890    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.239.170
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:32:42.759890    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:44.747073    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:44.747511    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:44.747593    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:47.181908    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:47.181908    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:47.188297    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:47.188827    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:47.188827    5100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:32:49.529003    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
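Note: the diff in the command above exits non-zero when the installed unit differs from the rendered one, or, as here, when /lib/systemd/system/docker.service does not exist yet ("can't stat"), so the || branch installs the new unit, reloads systemd, enables docker, and restarts it; an unchanged unit is a no-op. A sketch of composing that update-only-if-changed command; the helper name is made up:

package main

import "fmt"

// swapUnitCmd renders the diff-or-replace idiom from the log for one unit path.
func swapUnitCmd(unit string) string {
	return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
		"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
		"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unit)
}

func main() { fmt.Println(swapUnitCmd("/lib/systemd/system/docker.service")) }
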
	I0428 18:32:49.529584    5100 machine.go:97] duration metric: took 44.2765326s to provisionDockerMachine
	I0428 18:32:49.529584    5100 start.go:293] postStartSetup for "multinode-788600-m02" (driver="hyperv")
	I0428 18:32:49.529584    5100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:32:49.541764    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:32:49.541764    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:54.060378    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:54.060378    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:54.060776    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:32:54.169892    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.628053s)
	I0428 18:32:54.184389    5100 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:32:54.190850    5100 command_runner.go:130] > NAME=Buildroot
	I0428 18:32:54.190850    5100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:32:54.190850    5100 command_runner.go:130] > ID=buildroot
	I0428 18:32:54.190850    5100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:32:54.190850    5100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:32:54.191950    5100 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:32:54.192074    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:32:54.192496    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:32:54.193473    5100 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:32:54.193473    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:32:54.208684    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:32:54.228930    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:32:54.273925    5100 start.go:296] duration metric: took 4.744136s for postStartSetup
	I0428 18:32:54.274049    5100 fix.go:56] duration metric: took 1m25.5316046s for fixHost
	I0428 18:32:54.274160    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:56.306850    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:56.307120    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:56.307120    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:58.721421    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:58.721421    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:58.729781    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:58.729925    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:58.729925    5100 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 18:32:58.850694    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714354378.855254990
	
	I0428 18:32:58.850694    5100 fix.go:216] guest clock: 1714354378.855254990
	I0428 18:32:58.850694    5100 fix.go:229] Guest: 2024-04-28 18:32:58.85525499 -0700 PDT Remote: 2024-04-28 18:32:54.2740494 -0700 PDT m=+227.568030201 (delta=4.58120559s)
	I0428 18:32:58.850694    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:00.855861    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:00.855861    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:00.855943    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:03.353889    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:03.354496    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:03.359702    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:33:03.360312    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:33:03.360312    5100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714354378
	I0428 18:33:03.507702    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:32:58 UTC 2024
	
	I0428 18:33:03.507776    5100 fix.go:236] clock set: Mon Apr 29 01:32:58 UTC 2024
	 (err=<nil>)
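	Note: the clock fix above reads the guest's epoch time over SSH, compares it with the host clock (a 4.58s delta in this run), and writes the host time back with `sudo date -s @<epoch>`. A minimal sketch of that comparison; the 2-second threshold is an assumption, not minikube's actual cutoff:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1714354378.855254990" // what the guest's date command returned above
	sec, _ := strconv.ParseInt(strings.Split(guestOut, ".")[0], 10, 64)
	drift := time.Since(time.Unix(sec, 0))
	if drift < 0 {
		drift = -drift
	}
	if drift > 2*time.Second { // assumed threshold
		// Command a provisioner would run on the guest to resync its clock.
		fmt.Printf("sudo date -s @%d\n", time.Now().Unix())
	}
}
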
	I0428 18:33:03.507822    5100 start.go:83] releasing machines lock for "multinode-788600-m02", held for 1m34.7655374s
	I0428 18:33:03.508023    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:07.913230    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:07.913475    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:07.916681    5100 out.go:177] * Found network options:
	I0428 18:33:07.927793    5100 out.go:177]   - NO_PROXY=172.27.239.170
	W0428 18:33:07.930394    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:33:07.933609    5100 out.go:177]   - NO_PROXY=172.27.239.170
	W0428 18:33:07.935889    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 18:33:07.937225    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:33:07.940076    5100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:33:07.940160    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:07.950375    5100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:33:07.950375    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:10.050724    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:10.051108    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:10.051210    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:12.565621    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:12.565621    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:12.566812    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:33:12.598545    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:12.598640    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:12.598771    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:33:12.664665    5100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0428 18:33:12.665276    5100 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7148894s)
	W0428 18:33:12.665374    5100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:33:12.679974    5100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:33:12.789857    5100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:33:12.790010    5100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:33:12.790010    5100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8497669s)
	I0428 18:33:12.790010    5100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:33:12.790010    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:33:12.790288    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:33:12.826620    5100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:33:12.841093    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:33:12.871023    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:33:12.892178    5100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:33:12.905247    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:33:12.938633    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:33:12.970304    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:33:13.001024    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:33:13.032485    5100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:33:13.065419    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:33:13.096245    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:33:13.128214    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:33:13.166014    5100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:33:13.183104    5100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:33:13.193636    5100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:33:13.223445    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:33:13.433968    5100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 18:33:13.467059    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:33:13.481994    5100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:33:13.506238    5100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:33:13.506238    5100 command_runner.go:130] > [Unit]
	I0428 18:33:13.506238    5100 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:33:13.506238    5100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:33:13.506238    5100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:33:13.506238    5100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:33:13.506238    5100 command_runner.go:130] > StartLimitBurst=3
	I0428 18:33:13.506238    5100 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:33:13.506238    5100 command_runner.go:130] > [Service]
	I0428 18:33:13.506238    5100 command_runner.go:130] > Type=notify
	I0428 18:33:13.506238    5100 command_runner.go:130] > Restart=on-failure
	I0428 18:33:13.506238    5100 command_runner.go:130] > Environment=NO_PROXY=172.27.239.170
	I0428 18:33:13.506238    5100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:33:13.506238    5100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:33:13.506238    5100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:33:13.506238    5100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:33:13.506238    5100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:33:13.506238    5100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:33:13.506238    5100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:33:13.506238    5100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:33:13.506238    5100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecStart=
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:33:13.506238    5100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:33:13.506238    5100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:33:13.506238    5100 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:33:13.506238    5100 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > LimitCORE=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:33:13.506781    5100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:33:13.506781    5100 command_runner.go:130] > TasksMax=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:33:13.506781    5100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:33:13.506781    5100 command_runner.go:130] > Delegate=yes
	I0428 18:33:13.506781    5100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:33:13.506781    5100 command_runner.go:130] > KillMode=process
	I0428 18:33:13.506781    5100 command_runner.go:130] > [Install]
	I0428 18:33:13.506781    5100 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:33:13.520708    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:33:13.558375    5100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:33:13.617753    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:33:13.659116    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:33:13.695731    5100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:33:13.761229    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:33:13.785450    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:33:13.821474    5100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 18:33:13.835113    5100 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:33:13.845616    5100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:33:13.860160    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:33:13.876613    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:33:13.922608    5100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:33:14.133089    5100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:33:14.319723    5100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:33:14.319858    5100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
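
Note: the 130-byte payload pushed to /etc/docker/daemon.json above pins Docker to the cgroupfs cgroup driver so it matches the kubelet. A sketch of rendering such a file; only the exec-opts setting is implied by the "configuring docker to use cgroupfs" line, and any other fields a real config carries are omitted here:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // what would be written to /etc/docker/daemon.json
}
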
	I0428 18:33:14.365706    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:33:14.564799    5100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:34:15.692524    5100 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0428 18:34:15.692592    5100 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0428 18:34:15.692592    5100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1276455s)
	I0428 18:34:15.705979    5100 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 18:34:15.728446    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0428 18:34:15.728577    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.809969396Z" level=info msg="Starting up"
	I0428 18:34:15.728577    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.810971814Z" level=info msg="containerd not running, starting managed containerd"
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.812287837Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.847769870Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.874938755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0428 18:34:15.728747    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875097458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0428 18:34:15.728747    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875160459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0428 18:34:15.728818    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875177259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728840    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875749069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.728840    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875908772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876188877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876290779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876312679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0428 18:34:15.729020    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876324280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877036692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877872507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881632774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881737076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881892779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881991681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883069900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883201902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883221703Z" level=info msg="metadata content store policy set" policy=shared
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900315007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900509811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900578112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900636113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900666214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900753815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901202723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901383226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901578330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901609830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901628931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901645731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901661531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901678632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901695332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901717232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901736033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901751733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901782434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901801134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901817034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901832734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901848035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901869935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729987    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901902536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901919336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901939336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901954637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901970337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901985537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902004338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902045138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902061339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902075139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902212941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902320843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902354644Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902423045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902464146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0428 18:34:15.730378    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902479446Z" level=info msg="NRI interface is disabled by configuration."
	I0428 18:34:15.730378    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903415363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903706068Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903861271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.904299478Z" level=info msg="containerd successfully booted in 0.059611s"
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.876990250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.969290393Z" level=info msg="Loading containers: start."
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.292494295Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.376103508Z" level=info msg="Loading containers: done."
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.420350009Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.421214025Z" level=info msg="Daemon has completed initialization"
	I0428 18:34:15.730646    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531900928Z" level=info msg="API listen on /var/run/docker.sock"
	I0428 18:34:15.730646    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531988129Z" level=info msg="API listen on [::]:2376"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 systemd[1]: Started Docker Application Container Engine.
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.594905647Z" level=info msg="Processing signal 'terminated'"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597013752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597659553Z" level=info msg="Daemon shutdown complete"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598156755Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598169255Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 dockerd[1045]: time="2024-04-29T01:33:15.672598455Z" level=info msg="Starting up"
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 dockerd[1045]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0428 18:34:15.739602    5100 out.go:177] 
	W0428 18:34:15.742382    5100 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 01:32:47 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.809969396Z" level=info msg="Starting up"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.810971814Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.812287837Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.847769870Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.874938755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875097458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875160459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875177259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875749069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875908772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876188877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876290779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876312679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876324280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877036692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877872507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881632774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881737076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881892779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881991681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883069900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883201902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883221703Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900315007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900509811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900578112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900636113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900666214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900753815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901202723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901383226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901578330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901609830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901628931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901645731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901661531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901678632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901695332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901717232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901736033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901751733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901782434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901801134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901817034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901832734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901848035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901869935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901902536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901919336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901939336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901954637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901970337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901985537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902004338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902045138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902061339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902075139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902212941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902320843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902354644Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902423045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902464146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902479446Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903415363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903706068Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903861271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.904299478Z" level=info msg="containerd successfully booted in 0.059611s"
	Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.876990250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.969290393Z" level=info msg="Loading containers: start."
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.292494295Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.376103508Z" level=info msg="Loading containers: done."
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.420350009Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.421214025Z" level=info msg="Daemon has completed initialization"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531900928Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531988129Z" level=info msg="API listen on [::]:2376"
	Apr 29 01:32:49 multinode-788600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 01:33:14 multinode-788600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.594905647Z" level=info msg="Processing signal 'terminated'"
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597013752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597659553Z" level=info msg="Daemon shutdown complete"
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598156755Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598169255Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 01:33:15 multinode-788600-m02 dockerd[1045]: time="2024-04-29T01:33:15.672598455Z" level=info msg="Starting up"
	Apr 29 01:34:15 multinode-788600-m02 dockerd[1045]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0428 18:34:15.742938    5100 out.go:239] * 
	W0428 18:34:15.744099    5100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 18:34:15.746768    5100 out.go:177] 
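	
	For reference, the RUNTIME_ENABLE failure above reduces to dockerd on multinode-788600-m02 timing out while dialing /run/containerd/containerd.sock after the 01:33:15 restart. A minimal shell sketch for re-checking this from the host; the profile and node names are taken from the log, and the commands are an illustration rather than part of the original run:
	
	  # Open a shell on the affected worker node (names as reported above)
	  minikube ssh -p multinode-788600 -n multinode-788600-m02
	
	  # Inside the VM: compare the state of both daemons and re-read the journal
	  sudo systemctl status docker
	  sudo journalctl --no-pager -u docker | tail -n 50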
	
	
	==> Docker <==
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.192798273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.192815665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.192905226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:20 multinode-788600 cri-dockerd[1283]: time="2024-04-29T01:31:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dea947e4b267e26015fd02b05999931c5d15cdf9c0e4a41ce1c508c898d48d2e/resolv.conf as [nameserver 172.27.224.1]"
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.540500991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.540801057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.540813752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.540979478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.707898608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.708019154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.708039845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:20 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:20.708121309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:20 multinode-788600 cri-dockerd[1283]: time="2024-04-29T01:31:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/faa3aa22a49af636bcdb5899779442ac222d821a7fa50dd30cd32fa6402bf907/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048222904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048349906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048369306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048583609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:44 multinode-788600 dockerd[1056]: time="2024-04-29T01:31:44.200185664Z" level=info msg="ignoring event" container=095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 01:31:44 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:44.201210477Z" level=info msg="shim disconnected" id=095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189 namespace=moby
	Apr 29 01:31:44 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:44.202700296Z" level=warning msg="cleaning up after shim disconnected" id=095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189 namespace=moby
	Apr 29 01:31:44 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:44.202976799Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.620883378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.621051281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.621073181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.621271283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a287b9d74963       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   c9c6fe831ace4       storage-provisioner
	aac9ab11d8404       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   faa3aa22a49af       busybox-fc5497c4f-4qvlm
	871f1babd92ce       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   dea947e4b267e       coredns-7db6d8ff4d-rp2lx
	a9806e7345fc9       4950bb10b3f87                                                                                         3 minutes ago       Running             kindnet-cni               1                   a2f37ed6a52fb       kindnet-52rrh
	b16bbceb6bdee       a0bf559e280cf                                                                                         3 minutes ago       Running             kube-proxy                1                   330975770c2cb       kube-proxy-bkkql
	095a245b1d2bf       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   c9c6fe831ace4       storage-provisioner
	ace8dc8c78d56       c42f13656d0b2                                                                                         3 minutes ago       Running             kube-apiserver            0                   79616a5b9f290       kube-apiserver-multinode-788600
	22857de4092ae       c7aad43836fa5                                                                                         3 minutes ago       Running             kube-controller-manager   1                   b9e44b89472c5       kube-controller-manager-multinode-788600
	64707d485e51b       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   dfe4b0f43edfa       etcd-multinode-788600
	705d4c5c927e7       259c8277fcbbc                                                                                         3 minutes ago       Running             kube-scheduler            1                   a1f5f4944d7ec       kube-scheduler-multinode-788600
	d0d5fbf9b871e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago      Exited              busybox                   0                   fcbd24a1db2d8       busybox-fc5497c4f-4qvlm
	64e6fcf4a3f2f       cbb01a7bd410d                                                                                         25 minutes ago      Exited              coredns                   0                   70af634f6134d       coredns-7db6d8ff4d-rp2lx
	33e59494d8be9       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago      Exited              kindnet-cni               0                   d1342e9d71114       kindnet-52rrh
	8542b2c39cf5b       a0bf559e280cf                                                                                         25 minutes ago      Exited              kube-proxy                0                   776d075f3716e       kube-proxy-bkkql
	d55fefd692cfc       259c8277fcbbc                                                                                         25 minutes ago      Exited              kube-scheduler            0                   26381d4606b51       kube-scheduler-multinode-788600
	edb2c636ad5d7       c7aad43836fa5                                                                                         25 minutes ago      Exited              kube-controller-manager   0                   9ffe1b8b41e4c       kube-controller-manager-multinode-788600
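	
	The container-status table above is the CRI-level view of the primary node. An equivalent listing can be reproduced with crictl pointed at the cri-dockerd socket recorded in the node annotations below; this is a hedged sketch, not output captured during this run:
	
	  # List all containers (running and exited) through the CRI socket the
	  # kubelet uses on these nodes (unix:///var/run/cri-dockerd.sock)
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a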
	
	
	==> coredns [64e6fcf4a3f2] <==
	[INFO] 10.244.0.3:53871 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001397s
	[INFO] 10.244.0.3:34178 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001399s
	[INFO] 10.244.0.3:59684 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001391s
	[INFO] 10.244.0.3:35758 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0003144s
	[INFO] 10.244.0.3:54201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000513s
	[INFO] 10.244.0.3:57683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000876s
	[INFO] 10.244.0.3:49694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001237s
	[INFO] 10.244.1.2:48711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229s
	[INFO] 10.244.1.2:37460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001261s
	[INFO] 10.244.1.2:32950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001014s
	[INFO] 10.244.1.2:49157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000511s
	[INFO] 10.244.0.3:49454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003908s
	[INFO] 10.244.0.3:56632 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000654s
	[INFO] 10.244.0.3:51203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000936s
	[INFO] 10.244.0.3:53433 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001697s
	[INFO] 10.244.1.2:54748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001237s
	[INFO] 10.244.1.2:55201 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002599s
	[INFO] 10.244.1.2:45426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000815s
	[INFO] 10.244.1.2:49822 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001063s
	[INFO] 10.244.0.3:38954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118s
	[INFO] 10.244.0.3:58102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002236s
	[INFO] 10.244.0.3:48832 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001238s
	[INFO] 10.244.0.3:49749 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001072s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [871f1babd92c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35167 - 57024 "HINFO IN 6138708222212467430.87596895660326264. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027490357s
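	
	The two coredns excerpts correspond to the two coredns containers in the container-status table (64e6fcf4a3f2 exited, 871f1babd92c running). As a sketch, the same logs can be pulled through the API server instead; the pod name is taken from the tables below and a working kubeconfig is assumed:
	
	  # Current coredns container
	  kubectl -n kube-system logs coredns-7db6d8ff4d-rp2lx
	  # Previous (exited) instance of the same pod
	  kubectl -n kube-system logs coredns-7db6d8ff4d-rp2lx --previous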
	
	
	==> describe nodes <==
	Name:               multinode-788600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T18_08_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:08:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:34:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:31:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.239.170
	  Hostname:    multinode-788600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad4b84e5f48240b1ba6c29345f8a41f7
	  System UUID:                6f78c2a9-1744-3642-a944-13bbeb7f5c76
	  Boot ID:                    5454e797-3a96-4b7c-aeb3-6a513f59521a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4qvlm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-rp2lx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-multinode-788600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m24s
	  kube-system                 kindnet-52rrh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-multinode-788600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-controller-manager-multinode-788600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-bkkql                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-multinode-788600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m (x6 over 25m)      kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x6 over 25m)      kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x6 over 25m)      kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     25m                    kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    25m                    kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                    kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           25m                    node-controller  Node multinode-788600 event: Registered Node multinode-788600 in Controller
	  Normal  NodeReady                25m                    kubelet          Node multinode-788600 status is now: NodeReady
	  Normal  Starting                 3m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m30s (x8 over 3m30s)  kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x8 over 3m30s)  kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x7 over 3m30s)  kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node multinode-788600 event: Registered Node multinode-788600 in Controller
	
	
	Name:               multinode-788600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T18_11_53_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:11:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:28:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.230.221
	  Hostname:    multinode-788600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f3f256c1ef74f1aabdee6846e11e827
	  System UUID:                ea348b67-6b29-8b46-84e3-ebf01858b203
	  Boot ID:                    23d1db59-b5c6-484d-aa22-1e61e2ff3b17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fdn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kindnet-hnvm4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-proxy-kc8c4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-788600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-788600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-788600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node multinode-788600-m02 event: Registered Node multinode-788600-m02 in Controller
	  Normal  NodeReady                22m                kubelet          Node multinode-788600-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m12s              node-controller  Node multinode-788600-m02 event: Registered Node multinode-788600-m02 in Controller
	  Normal  NodeNotReady             2m32s              node-controller  Node multinode-788600-m02 status is now: NodeNotReady
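	Note: the four Unknown conditions and the NodeNotReady event above indicate the m02 kubelet stopped posting heartbeats after the restart, so the node controller marked the node unreachable and applied the NoSchedule/NoExecute taints listed under Taints. A minimal way to confirm from the host, assuming the kubectl context this test creates:
	
	  kubectl --context multinode-788600 -n kube-node-lease get lease multinode-788600-m02
	  kubectl --context multinode-788600 get node multinode-788600-m02 -o jsonpath='{.spec.taints}'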
	
	
	Name:               multinode-788600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T18_26_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:26:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:27:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 01:26:54 +0000   Mon, 29 Apr 2024 01:28:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 01:26:54 +0000   Mon, 29 Apr 2024 01:28:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 01:26:54 +0000   Mon, 29 Apr 2024 01:28:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 01:26:54 +0000   Mon, 29 Apr 2024 01:28:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.237.64
	  Hostname:    multinode-788600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c41584b342a04f81be6af8e0fa2662b4
	  System UUID:                d2bef039-c806-3e44-a4e9-030b4d9c5429
	  Boot ID:                    c5373a0a-c184-4269-b8cc-bb1d84dda438
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ms872       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-sjsfc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  Starting                 7m46s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x2 over 18m)      kubelet          Node multinode-788600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x2 over 18m)      kubelet          Node multinode-788600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x2 over 18m)      kubelet          Node multinode-788600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m                    kubelet          Node multinode-788600-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m50s (x2 over 7m50s)  kubelet          Node multinode-788600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m50s (x2 over 7m50s)  kubelet          Node multinode-788600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m50s (x2 over 7m50s)  kubelet          Node multinode-788600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m49s                  node-controller  Node multinode-788600-m03 event: Registered Node multinode-788600-m03 in Controller
	  Normal  NodeReady                7m42s                  kubelet          Node multinode-788600-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m14s                  node-controller  Node multinode-788600-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m12s                  node-controller  Node multinode-788600-m03 event: Registered Node multinode-788600-m03 in Controller
	
	
	==> dmesg <==
	[  +1.336669] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.221212] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.025561] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 01:30] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.106257] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.071543] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +25.571821] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.114126] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.563040] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[  +0.196534] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	[  +0.227945] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[Apr29 01:31] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +0.197785] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.196086] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.277819] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.898631] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.107432] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.348423] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +2.115772] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.067702] kauditd_printk_skb: 10 callbacks suppressed
	[  +3.690885] systemd-fstab-generator[2336]: Ignoring "noauto" option for root device
	[  +3.420808] kauditd_printk_skb: 70 callbacks suppressed
	[ +13.045790] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [64707d485e51] <==
	{"level":"info","ts":"2024-04-29T01:31:08.590486Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T01:31:08.590514Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T01:31:08.591082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 switched to configuration voters=(10532433051239484145)"}
	{"level":"info","ts":"2024-04-29T01:31:08.594273Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T01:31:08.594551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"557f475d665ae496","local-member-id":"922ab80e8fb68af1","added-peer-id":"922ab80e8fb68af1","added-peer-peer-urls":["https://172.27.231.169:2380"]}
	{"level":"info","ts":"2024-04-29T01:31:08.594873Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"557f475d665ae496","local-member-id":"922ab80e8fb68af1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T01:31:08.594932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T01:31:08.595404Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.239.170:2380"}
	{"level":"info","ts":"2024-04-29T01:31:08.603393Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"922ab80e8fb68af1","initial-advertise-peer-urls":["https://172.27.239.170:2380"],"listen-peer-urls":["https://172.27.239.170:2380"],"advertise-client-urls":["https://172.27.239.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.239.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T01:31:08.603549Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T01:31:08.60385Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.239.170:2380"}
	{"level":"info","ts":"2024-04-29T01:31:09.939198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T01:31:09.939277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T01:31:09.939326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 received MsgPreVoteResp from 922ab80e8fb68af1 at term 2"}
	{"level":"info","ts":"2024-04-29T01:31:09.939343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.93935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 received MsgVoteResp from 922ab80e8fb68af1 at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.939374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 922ab80e8fb68af1 elected leader 922ab80e8fb68af1 at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.947217Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"922ab80e8fb68af1","local-member-attributes":"{Name:multinode-788600 ClientURLs:[https://172.27.239.170:2379]}","request-path":"/0/members/922ab80e8fb68af1/attributes","cluster-id":"557f475d665ae496","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T01:31:09.94725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T01:31:09.947267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T01:31:09.948622Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T01:31:09.948642Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T01:31:09.95067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T01:31:09.95067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.239.170:2379"}
	
	
	==> kernel <==
	 01:34:36 up 5 min,  0 users,  load average: 0.27, 0.28, 0.13
	Linux multinode-788600 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33e59494d8be] <==
	I0429 01:28:03.302125       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:13.311628       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:13.311750       1 main.go:227] handling current node
	I0429 01:28:13.311809       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:13.311821       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:13.312461       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:13.312599       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:23.327565       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:23.327670       1 main.go:227] handling current node
	I0429 01:28:23.327685       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:23.327693       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:23.328051       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:23.328081       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:33.338514       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:33.338596       1 main.go:227] handling current node
	I0429 01:28:33.338609       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:33.338616       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:33.339035       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:33.339064       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:43.358460       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:43.358485       1 main.go:227] handling current node
	I0429 01:28:43.358495       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:43.358501       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:43.358607       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:43.358615       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a9806e7345fc] <==
	I0429 01:33:55.395242       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:34:05.404197       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:34:05.404304       1 main.go:227] handling current node
	I0429 01:34:05.404318       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:34:05.404325       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:34:05.404453       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:34:05.404481       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:34:15.411919       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:34:15.412058       1 main.go:227] handling current node
	I0429 01:34:15.412071       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:34:15.412079       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:34:15.412443       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:34:15.412477       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:34:25.434302       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:34:25.434341       1 main.go:227] handling current node
	I0429 01:34:25.434354       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:34:25.434360       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:34:25.434832       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:34:25.434957       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:34:35.444343       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:34:35.444450       1 main.go:227] handling current node
	I0429 01:34:35.444463       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:34:35.444471       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:34:35.444575       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:34:35.444588       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
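	Note: kindnet re-syncs roughly every ten seconds, installing a route for each remote node's PodCIDR; the second instance above is already tracking the new control-plane IP (172.27.239.170) alongside m02 and m03. The CIDR assignments it mirrors can be listed with standard kubectl:
	
	  kubectl --context multinode-788600 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR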
	
	
	==> kube-apiserver [ace8dc8c78d5] <==
	I0429 01:31:11.670353       1 aggregator.go:165] initial CRD sync complete...
	I0429 01:31:11.670367       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 01:31:11.670374       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 01:31:11.670381       1 cache.go:39] Caches are synced for autoregister controller
	I0429 01:31:11.713290       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 01:31:11.719044       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 01:31:11.719588       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 01:31:11.719870       1 policy_source.go:224] refreshing policies
	I0429 01:31:11.721136       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 01:31:11.721383       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 01:31:11.721444       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 01:31:11.726066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 01:31:11.732819       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 01:31:11.736360       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 01:31:11.754183       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 01:31:12.531066       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 01:31:13.251224       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.231.169 172.27.239.170]
	I0429 01:31:13.254587       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 01:31:13.287928       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 01:31:14.626912       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 01:31:14.850074       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 01:31:14.883026       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 01:31:15.050651       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 01:31:15.073275       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0429 01:31:33.250172       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.239.170]
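	Note: the two "Resetting endpoints for master service" lines bracket the IP change: at 01:31:13 both the old (172.27.231.169) and new (172.27.239.170) control-plane addresses were present, and by 01:31:33 only the new one remained. The current value can be read with:
	
	  kubectl --context multinode-788600 -n default get endpoints kubernetes -o wide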
	
	
	==> kube-controller-manager [22857de4092a] <==
	I0429 01:31:24.383944       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 01:31:24.387877       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 01:31:24.392179       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0429 01:31:24.394525       1 shared_informer.go:320] Caches are synced for disruption
	I0429 01:31:24.420567       1 shared_informer.go:320] Caches are synced for TTL
	I0429 01:31:24.433566       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0429 01:31:24.454141       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 01:31:24.456934       1 shared_informer.go:320] Caches are synced for node
	I0429 01:31:24.457178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0429 01:31:24.457311       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0429 01:31:24.457340       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0429 01:31:24.457349       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0429 01:31:24.480368       1 shared_informer.go:320] Caches are synced for taint
	I0429 01:31:24.481351       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 01:31:24.511463       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600"
	I0429 01:31:24.511797       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m02"
	I0429 01:31:24.517067       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m03"
	I0429 01:31:24.517148       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 01:31:24.522249       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 01:31:24.523170       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 01:31:24.951816       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 01:31:24.951855       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 01:31:24.960529       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 01:32:04.630813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.212701ms"
	I0429 01:32:04.632363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.4µs"
	
	
	==> kube-controller-manager [edb2c636ad5d] <==
	I0429 01:09:14.942008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.7µs"
	I0429 01:09:17.024665       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 01:11:53.161790       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m02\" does not exist"
	I0429 01:11:53.177770       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m02" podCIDRs=["10.244.1.0/24"]
	I0429 01:11:57.056826       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m02"
	I0429 01:12:12.447989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:12:38.086505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.050872ms"
	I0429 01:12:38.156586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.927316ms"
	I0429 01:12:38.156985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.8µs"
	I0429 01:12:40.843412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.957702ms"
	I0429 01:12:40.844132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.3µs"
	I0429 01:12:40.953439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.253802ms"
	I0429 01:12:40.953522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.8µs"
	I0429 01:16:25.628360       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:16:25.629372       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m03\" does not exist"
	I0429 01:16:25.644835       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m03" podCIDRs=["10.244.2.0/24"]
	I0429 01:16:27.127052       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m03"
	I0429 01:16:44.649366       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:24:07.261198       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:26:40.701566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:26:46.734897       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m03\" does not exist"
	I0429 01:26:46.736292       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:26:46.764001       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m03" podCIDRs=["10.244.3.0/24"]
	I0429 01:26:54.696904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:28:22.452429       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	
	
	==> kube-proxy [8542b2c39cf5] <==
	I0429 01:09:05.708863       1 server_linux.go:69] "Using iptables proxy"
	I0429 01:09:05.742050       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.231.169"]
	I0429 01:09:05.825870       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 01:09:05.825916       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 01:09:05.826023       1 server_linux.go:165] "Using iptables Proxier"
	I0429 01:09:05.838937       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 01:09:05.840502       1 server.go:872] "Version info" version="v1.30.0"
	I0429 01:09:05.840525       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:09:05.843961       1 config.go:192] "Starting service config controller"
	I0429 01:09:05.846365       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 01:09:05.846409       1 config.go:319] "Starting node config controller"
	I0429 01:09:05.846416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 01:09:05.849462       1 config.go:101] "Starting endpoint slice config controller"
	I0429 01:09:05.849804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 01:09:05.946586       1 shared_informer.go:320] Caches are synced for node config
	I0429 01:09:05.946631       1 shared_informer.go:320] Caches are synced for service config
	I0429 01:09:05.953363       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b16bbceb6bde] <==
	I0429 01:31:14.456633       1 server_linux.go:69] "Using iptables proxy"
	I0429 01:31:14.508160       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.239.170"]
	I0429 01:31:14.653659       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 01:31:14.653749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 01:31:14.653771       1 server_linux.go:165] "Using iptables Proxier"
	I0429 01:31:14.664302       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 01:31:14.666172       1 server.go:872] "Version info" version="v1.30.0"
	I0429 01:31:14.666194       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:31:14.669815       1 config.go:192] "Starting service config controller"
	I0429 01:31:14.671494       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 01:31:14.671761       1 config.go:101] "Starting endpoint slice config controller"
	I0429 01:31:14.672103       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 01:31:14.672303       1 config.go:319] "Starting node config controller"
	I0429 01:31:14.678976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 01:31:14.772647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 01:31:14.772720       1 shared_informer.go:320] Caches are synced for service config
	I0429 01:31:14.779371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [705d4c5c927e] <==
	I0429 01:31:09.468784       1 serving.go:380] Generated self-signed cert in-memory
	W0429 01:31:11.642384       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 01:31:11.642434       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 01:31:11.642447       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 01:31:11.642454       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 01:31:11.677884       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 01:31:11.677974       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:31:11.680797       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 01:31:11.680837       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 01:31:11.681224       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 01:31:11.684058       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 01:31:11.781602       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d55fefd692cf] <==
	E0429 01:08:46.888518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 01:08:47.003501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.003561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.057469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.059611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.081787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 01:08:47.082341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 01:08:47.119979       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 01:08:47.120206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 01:08:47.214340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.214395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.226615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 01:08:47.226976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 01:08:47.234210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 01:08:47.234301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 01:08:47.252946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 01:08:47.253198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 01:08:47.278229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 01:08:47.278421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 01:08:47.396441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 01:08:47.396483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 01:08:47.456293       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 01:08:47.456674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 01:08:49.334502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 01:28:45.556004       1 run.go:74] "command failed" err="finished without leader elect"
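	Note: "finished without leader elect" is the pre-restart scheduler instance exiting at 01:28:45 when it lost its leader lease during the cluster stop; the replacement instance ([705d4c5c927e] above) took over at 01:31. The active holder can be read from the coordination lease, e.g.:
	
	  kubectl --context multinode-788600 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'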
	
	
	==> kubelet <==
	Apr 29 01:31:16 multinode-788600 kubelet[1536]: E0429 01:31:16.428288    1536 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rp2lx" podUID="d6f6f38d-f1f3-454e-a469-c76c8fbc5d99"
	Apr 29 01:31:16 multinode-788600 kubelet[1536]: E0429 01:31:16.429275    1536 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-4qvlm" podUID="a724a733-4b18-4f15-8918-9fe472fcd02c"
	Apr 29 01:31:16 multinode-788600 kubelet[1536]: I0429 01:31:16.900854    1536 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 29 01:31:20 multinode-788600 kubelet[1536]: I0429 01:31:20.435713    1536 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea947e4b267e26015fd02b05999931c5d15cdf9c0e4a41ce1c508c898d48d2e"
	Apr 29 01:31:44 multinode-788600 kubelet[1536]: I0429 01:31:44.812186    1536 scope.go:117] "RemoveContainer" containerID="16ea9b9acd267cf8308f1f96b03ab43c846b40a3396ec52a16efccc8f8101f69"
	Apr 29 01:31:44 multinode-788600 kubelet[1536]: I0429 01:31:44.812614    1536 scope.go:117] "RemoveContainer" containerID="095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189"
	Apr 29 01:31:44 multinode-788600 kubelet[1536]: E0429 01:31:44.812917    1536 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(04bc447a-c711-4c23-ad4b-db5fd32b28d2)\"" pod="kube-system/storage-provisioner" podUID="04bc447a-c711-4c23-ad4b-db5fd32b28d2"
	Apr 29 01:31:57 multinode-788600 kubelet[1536]: I0429 01:31:57.426875    1536 scope.go:117] "RemoveContainer" containerID="095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189"
	Apr 29 01:32:06 multinode-788600 kubelet[1536]: I0429 01:32:06.400415    1536 scope.go:117] "RemoveContainer" containerID="e148c0cdbae012e13553185eaf9647e7246c72513d9635d3374eb7ff14f06607"
	Apr 29 01:32:06 multinode-788600 kubelet[1536]: I0429 01:32:06.451784    1536 scope.go:117] "RemoveContainer" containerID="27388b03fb268ba63831b1854067c0397773cf8e5fd633f335a773b88f2779ee"
	Apr 29 01:32:06 multinode-788600 kubelet[1536]: E0429 01:32:06.453345    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:33:06 multinode-788600 kubelet[1536]: E0429 01:33:06.452130    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:34:06 multinode-788600 kubelet[1536]: E0429 01:34:06.449749    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
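	Note: the once-a-minute canary failure is the kubelet trying to create its KUBE-KUBELET-CANARY chain in the ip6tables nat table, which this guest kernel apparently does not provide (kube-proxy likewise reports "No iptables support for family" IPv6 above). It is noise for this IPv4-only cluster. A quick check from the guest, using standard tooling:
	
	  lsmod | grep ip6table_nat   # is the module loaded?
	  sudo modprobe ip6table_nat  # try loading it; fails if not built
	  sudo ip6tables -t nat -L -n # lists the table once available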
	

-- /stdout --
** stderr ** 
	W0428 18:34:29.035410    8224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
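Note: the stderr warning is minikube probing the Docker CLI's current context and finding a dangling "default" reference whose meta.json is absent; it is cosmetic for this test. A minimal cleanup sketch, assuming the Docker CLI is installed on the host:

  docker context ls            # inspect configured contexts
  docker context use default   # reset the current context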
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-788600 -n multinode-788600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-788600 -n multinode-788600: (11.5163567s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-788600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (439.92s)

TestMultiNode/serial/DeleteNode (88.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 node delete m03: (31.2774181s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr
E0428 18:35:36.435252    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr: exit status 2 (23.455784s)

-- stdout --
	multinode-788600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-788600-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0428 18:35:22.044869   11684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 18:35:22.054201   11684 out.go:291] Setting OutFile to fd 1952 ...
	I0428 18:35:22.054949   11684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:35:22.054949   11684 out.go:304] Setting ErrFile to fd 1948...
	I0428 18:35:22.054949   11684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:35:22.075843   11684 out.go:298] Setting JSON to false
	I0428 18:35:22.076001   11684 mustload.go:65] Loading cluster: multinode-788600
	I0428 18:35:22.076127   11684 notify.go:220] Checking for updates...
	I0428 18:35:22.077606   11684 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:35:22.077606   11684 status.go:255] checking status of multinode-788600 ...
	I0428 18:35:22.079389   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:35:24.221323   11684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:35:24.221390   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:24.221456   11684 status.go:330] multinode-788600 host status = "Running" (err=<nil>)
	I0428 18:35:24.221567   11684 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:35:24.222834   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:35:26.307970   11684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:35:26.308219   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:26.308316   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:35:28.784877   11684 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:35:28.784877   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:28.784877   11684 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:35:28.799000   11684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 18:35:28.799000   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:35:30.828852   11684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:35:30.828852   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:30.828852   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:35:33.344549   11684 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:35:33.345059   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:33.345358   11684 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:35:33.462126   11684 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6631149s)
	I0428 18:35:33.479115   11684 ssh_runner.go:195] Run: systemctl --version
	I0428 18:35:33.503242   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:35:33.535469   11684 kubeconfig.go:125] found "multinode-788600" server: "https://172.27.239.170:8443"
	I0428 18:35:33.535469   11684 api_server.go:166] Checking apiserver status ...
	I0428 18:35:33.550688   11684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:35:33.594430   11684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1873/cgroup
	W0428 18:35:33.614049   11684 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1873/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0428 18:35:33.627312   11684 ssh_runner.go:195] Run: ls
	I0428 18:35:33.638094   11684 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:35:33.646184   11684 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
	I0428 18:35:33.646341   11684 status.go:422] multinode-788600 apiserver status = Running (err=<nil>)
	I0428 18:35:33.646456   11684 status.go:257] multinode-788600 status: &{Name:multinode-788600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 18:35:33.646646   11684 status.go:255] checking status of multinode-788600-m02 ...
	I0428 18:35:33.647167   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:35:35.752115   11684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:35:35.752115   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:35.752115   11684 status.go:330] multinode-788600-m02 host status = "Running" (err=<nil>)
	I0428 18:35:35.752115   11684 host.go:66] Checking if "multinode-788600-m02" exists ...
	I0428 18:35:35.752668   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:35:37.957811   11684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:35:37.957866   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:37.957943   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:35:40.532351   11684 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:35:40.532351   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:40.532635   11684 host.go:66] Checking if "multinode-788600-m02" exists ...
	I0428 18:35:40.545768   11684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 18:35:40.545768   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:35:42.624807   11684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:35:42.624807   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:42.624807   11684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:35:45.203299   11684 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:35:45.203299   11684 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:35:45.204032   11684 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:35:45.307847   11684 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7620671s)
	I0428 18:35:45.322754   11684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:35:45.347862   11684 status.go:257] multinode-788600-m02 status: &{Name:multinode-788600-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr" : exit status 2
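Note on the exit status: in the stderr above, the last probe run against the worker is "sudo systemctl is-active --quiet service kubelet" on multinode-788600-m02 (18:35:45), and the very next line records Kubelet:Stopped, which is what drives the nonzero exit. A hand-run equivalent of that probe, sketched with this run's profile and node names (the --node flag for selecting a node is assumed to be available in this minikube build):

    out/minikube-windows-amd64.exe -p multinode-788600 ssh --node multinode-788600-m02 "sudo systemctl is-active kubelet"

An answer of inactive or dead would match the Kubelet:Stopped verdict above.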
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-788600 -n multinode-788600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-788600 -n multinode-788600: (11.5649718s)
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 logs -n 25: (8.6569577s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600:/home/docker/cp-test_multinode-788600-m02_multinode-788600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:20 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600 sudo cat                                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:20 PDT | 28 Apr 24 18:21 PDT |
	|         | /home/docker/cp-test_multinode-788600-m02_multinode-788600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m03:/home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600-m03 sudo cat                                                                    | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | /home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp testdata\cp-test.txt                                                                                 | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:21 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:21 PDT | 28 Apr 24 18:22 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | multinode-788600:/home/docker/cp-test_multinode-788600-m03_multinode-788600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600 sudo cat                                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:22 PDT |
	|         | /home/docker/cp-test_multinode-788600-m03_multinode-788600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt                                                        | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:22 PDT | 28 Apr 24 18:23 PDT |
	|         | multinode-788600-m02:/home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n                                                                                                  | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:23 PDT | 28 Apr 24 18:23 PDT |
	|         | multinode-788600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-788600 ssh -n multinode-788600-m02 sudo cat                                                                    | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:23 PDT | 28 Apr 24 18:23 PDT |
	|         | /home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-788600 node stop m03                                                                                           | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:23 PDT | 28 Apr 24 18:23 PDT |
	| node    | multinode-788600 node start                                                                                              | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:24 PDT | 28 Apr 24 18:26 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-788600                                                                                                 | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:27 PDT |                     |
	| stop    | -p multinode-788600                                                                                                      | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:27 PDT | 28 Apr 24 18:29 PDT |
	| start   | -p multinode-788600                                                                                                      | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:29 PDT |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-788600                                                                                                 | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:34 PDT |                     |
	| node    | multinode-788600 node delete                                                                                             | multinode-788600 | minikube1\jenkins | v1.33.0 | 28 Apr 24 18:34 PDT | 28 Apr 24 18:35 PDT |
	|         | m03                                                                                                                      |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 18:29:06
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
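	(Decoded against the first entry below: "I0428 18:29:06.809727    5100 out.go:291]" reads as severity Info, date 04/28, wall clock 18:29:06.809727, thread id 5100, logged at out.go line 291.)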
	I0428 18:29:06.809727    5100 out.go:291] Setting OutFile to fd 1908 ...
	I0428 18:29:06.810353    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:29:06.810353    5100 out.go:304] Setting ErrFile to fd 1912...
	I0428 18:29:06.810353    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:29:06.834778    5100 out.go:298] Setting JSON to false
	I0428 18:29:06.838611    5100 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11589,"bootTime":1714342556,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 18:29:06.838611    5100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 18:29:06.940529    5100 out.go:177] * [multinode-788600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 18:29:07.030586    5100 notify.go:220] Checking for updates...
	I0428 18:29:07.077632    5100 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:29:07.374230    5100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 18:29:07.485070    5100 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 18:29:07.638229    5100 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 18:29:07.772014    5100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 18:29:07.826039    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:29:07.826481    5100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 18:29:13.079444    5100 out.go:177] * Using the hyperv driver based on existing profile
	I0428 18:29:13.183795    5100 start.go:297] selected driver: hyperv
	I0428 18:29:13.183795    5100 start.go:901] validating driver "hyperv" against &{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:29:13.184921    5100 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 18:29:13.238392    5100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 18:29:13.239401    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:29:13.239401    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:29:13.239658    5100 start.go:340] cluster config:
	{Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.231.169 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:29:13.239658    5100 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 18:29:13.267965    5100 out.go:177] * Starting "multinode-788600" primary control-plane node in "multinode-788600" cluster
	I0428 18:29:13.273325    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:29:13.273757    5100 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 18:29:13.273855    5100 cache.go:56] Caching tarball of preloaded images
	I0428 18:29:13.274319    5100 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:29:13.274564    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:29:13.274592    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:29:13.277394    5100 start.go:360] acquireMachinesLock for multinode-788600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:29:13.277394    5100 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-788600"
	I0428 18:29:13.278010    5100 start.go:96] Skipping create...Using existing machine configuration
	I0428 18:29:13.278010    5100 fix.go:54] fixHost starting: 
	I0428 18:29:13.278669    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:15.841355    5100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:29:15.841355    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:15.841437    5100 fix.go:112] recreateIfNeeded on multinode-788600: state=Stopped err=<nil>
	W0428 18:29:15.841437    5100 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 18:29:15.844029    5100 out.go:177] * Restarting existing hyperv VM for "multinode-788600" ...
	I0428 18:29:15.847206    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:18.788290    5100 main.go:141] libmachine: Waiting for host to start...
	I0428 18:29:18.788290    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:20.894990    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:23.329935    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:23.329986    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:24.337456    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:26.424769    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:26.424769    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:26.424959    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:28.835446    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:28.835446    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:29.845210    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:31.915507    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:31.915507    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:31.916194    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:34.321357    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:34.321830    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:35.322335    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:37.477391    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:39.926983    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:29:39.926983    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:40.928783    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:43.017582    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:43.018601    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:43.018670    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:45.467215    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:45.467701    5100 main.go:141] libmachine: [stderr =====>] : 
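	(The host-start wait above spans roughly 27 seconds end to end: each iteration is a ~2 s Get-VM state call, a ~2.5 s network-adapter query whose stdout stays empty until the guest obtains a DHCP lease, and a ~1 s sleep; the first non-empty ipaddresses result, 172.27.239.170, arrives at 18:29:45.)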
	I0428 18:29:45.470855    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:47.452061    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:47.453391    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:47.453481    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:49.918620    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:49.918620    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:49.919129    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:29:49.921224    5100 machine.go:94] provisionDockerMachine start ...
	I0428 18:29:49.921854    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:51.906534    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:51.906962    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:51.906962    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:54.344777    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:54.345162    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:54.351253    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:29:54.351970    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:29:54.351970    5100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:29:54.482939    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 18:29:54.483063    5100 buildroot.go:166] provisioning hostname "multinode-788600"
	I0428 18:29:54.483182    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:56.467562    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:29:58.861415    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:29:58.861500    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:29:58.866474    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:29:58.867158    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:29:58.867158    5100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600 && echo "multinode-788600" | sudo tee /etc/hostname
	I0428 18:29:59.026469    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600
	
	I0428 18:29:59.027057    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:01.078535    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:01.078960    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:01.079062    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:03.473105    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:03.473105    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:03.480109    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:03.480643    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:03.480643    5100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:30:03.632326    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
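	(The /etc/hosts snippet above is a guard-then-edit: grep -xq checks whether any whole line already ends in the hostname; if not, an existing 127.0.1.1 entry is rewritten in place with sed, otherwise one is appended with tee. The empty command output here means either the guard matched or the silent sed branch ran, so nothing needed to be echoed.)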
	I0428 18:30:03.632436    5100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:30:03.632436    5100 buildroot.go:174] setting up certificates
	I0428 18:30:03.632533    5100 provision.go:84] configureAuth start
	I0428 18:30:03.632662    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:05.623591    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:05.623591    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:05.623674    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:07.995919    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:07.996008    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:07.996008    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:09.994705    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:09.994705    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:09.994978    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:12.476810    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:12.476810    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:12.476810    5100 provision.go:143] copyHostCerts
	I0428 18:30:12.477065    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:30:12.477065    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:30:12.477065    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:30:12.477997    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:30:12.479104    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:30:12.479438    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:30:12.479438    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:30:12.479915    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:30:12.480977    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:30:12.481170    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:30:12.481170    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:30:12.481170    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0428 18:30:12.482569    5100 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600 san=[127.0.0.1 172.27.239.170 localhost minikube multinode-788600]
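	The SAN list baked into the regenerated server certificate can be double-checked from the host with openssl, as a sketch using the path from the log line above:
	
	    openssl x509 -in C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -noout -text
	
	The Subject Alternative Name extension in that dump should list 127.0.0.1, 172.27.239.170, localhost, minikube and multinode-788600.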
	I0428 18:30:12.565240    5100 provision.go:177] copyRemoteCerts
	I0428 18:30:12.578456    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:30:12.578546    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:14.563247    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:14.563247    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:14.564084    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:17.004731    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:17.004884    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:17.005001    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:17.120514    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5420479s)
	I0428 18:30:17.120569    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:30:17.121103    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:30:17.169984    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:30:17.170584    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0428 18:30:17.216472    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:30:17.216472    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 18:30:17.262921    5100 provision.go:87] duration metric: took 13.630358s to configureAuth
	I0428 18:30:17.262921    5100 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:30:17.263897    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:30:17.264012    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:19.259871    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:19.259871    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:19.260050    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:21.723377    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:21.723454    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:21.729319    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:21.730083    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:21.730083    5100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:30:21.872016    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:30:21.872016    5100 buildroot.go:70] root file system type: tmpfs
	I0428 18:30:21.872016    5100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:30:21.872016    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:23.896924    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:26.313949    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:26.313949    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:26.322783    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:26.322938    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:26.322938    5100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:30:26.486115    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:30:26.486115    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:28.470749    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:30.893142    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:30.893142    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:30.900075    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:30.900075    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:30.900075    5100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:30:33.420018    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:30:33.420018    5100 machine.go:97] duration metric: took 43.498168s to provisionDockerMachine
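	The unit install at 18:30:30 is a write-then-diff idiom: render the new unit to docker.service.new, diff it against the live unit, and only on a difference move it into place and daemon-reload/enable/restart docker. Here the live path did not exist yet, so diff exits nonzero with the "can't stat" message above and the replacement branch runs, producing the fresh enable symlink rather than an error. A stripped-down skeleton of the same idiom, with render-unit standing in as a hypothetical generator:
	
	    render-unit > /tmp/docker.service.new
	    sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
	      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	    }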
	I0428 18:30:33.420018    5100 start.go:293] postStartSetup for "multinode-788600" (driver="hyperv")
	I0428 18:30:33.420018    5100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:30:33.433580    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:30:33.433580    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:35.421597    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:35.421597    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:35.421967    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:37.810277    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:37.811012    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:37.811315    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:37.920287    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4866971s)
	I0428 18:30:37.932767    5100 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:30:37.939254    5100 command_runner.go:130] > NAME=Buildroot
	I0428 18:30:37.939254    5100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:30:37.939254    5100 command_runner.go:130] > ID=buildroot
	I0428 18:30:37.939254    5100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:30:37.939254    5100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:30:37.939254    5100 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:30:37.939254    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:30:37.939952    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:30:37.940475    5100 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:30:37.940475    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:30:37.952512    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:30:37.969990    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:30:38.017497    5100 start.go:296] duration metric: took 4.5974689s for postStartSetup
	I0428 18:30:38.018511    5100 fix.go:56] duration metric: took 1m24.7403132s for fixHost
	I0428 18:30:38.018511    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:40.002285    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:40.002569    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:40.002569    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:42.426765    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:42.427054    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:42.433213    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:42.433408    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:42.433408    5100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 18:30:42.568495    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714354242.563104735
	
	I0428 18:30:42.568584    5100 fix.go:216] guest clock: 1714354242.563104735
	I0428 18:30:42.568584    5100 fix.go:229] Guest: 2024-04-28 18:30:42.563104735 -0700 PDT Remote: 2024-04-28 18:30:38.018511 -0700 PDT m=+91.312813201 (delta=4.544593735s)
	I0428 18:30:42.568783    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:44.528614    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:44.528614    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:44.529235    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:46.913452    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:46.913716    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:46.920153    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:30:46.920882    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.239.170 22 <nil> <nil>}
	I0428 18:30:46.921041    5100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714354242
	I0428 18:30:47.066116    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:30:42 UTC 2024
	
	I0428 18:30:47.066116    5100 fix.go:236] clock set: Mon Apr 29 01:30:42 UTC 2024
	 (err=<nil>)
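The exchange above is the guest-clock resync: minikube reads the guest's `date +%s.%N`, compares it against the host-side timestamp (delta=4.544593735s here), and resets the guest clock over SSH when the drift is too large. Below is a minimal Go sketch of that comparison; the helper name and the 2-second threshold are illustrative, not minikube's actual fix.go API.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if frac != "" {
		if ns, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := parseGuestClock("1714354242.563104735") // value logged above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	fmt.Printf("guest clock: %v (delta=%v)\n", guest, delta)
	// Past the threshold, the guest clock is reset over SSH with a command
	// of the form shown in the log:
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("sudo date -s @%d\n", time.Now().Unix())
	}
}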
	I0428 18:30:47.066675    5100 start.go:83] releasing machines lock for "multinode-788600", held for 1m33.788514s
	I0428 18:30:47.066769    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:49.059891    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:49.060388    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:49.060388    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:51.541826    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:51.541826    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:51.545987    5100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:30:51.546223    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:51.556964    5100 ssh_runner.go:195] Run: cat /version.json
	I0428 18:30:51.556964    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:53.612244    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:53.622682    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:30:53.622789    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:53.622943    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:30:56.119241    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:56.119241    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:56.120395    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:56.153523    5100 main.go:141] libmachine: [stdout =====>] : 172.27.239.170
	
	I0428 18:30:56.154538    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:30:56.154788    5100 sshutil.go:53] new ssh client: &{IP:172.27.239.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:30:56.212733    5100 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0428 18:30:56.212822    5100 ssh_runner.go:235] Completed: cat /version.json: (4.6558463s)
	I0428 18:30:56.227331    5100 ssh_runner.go:195] Run: systemctl --version
	I0428 18:30:56.298961    5100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:30:56.299087    5100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7530013s)
	I0428 18:30:56.299087    5100 command_runner.go:130] > systemd 252 (252)
	I0428 18:30:56.299087    5100 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0428 18:30:56.311091    5100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:30:56.322712    5100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0428 18:30:56.323363    5100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:30:56.335996    5100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:30:56.368726    5100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:30:56.368854    5100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 18:30:56.368894    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:30:56.369158    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:30:56.408119    5100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0428 18:30:56.420239    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:30:56.450407    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:30:56.468615    5100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:30:56.483087    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:30:56.518413    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:30:56.551580    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:30:56.590655    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:30:56.627626    5100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:30:56.668610    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:30:56.707360    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:30:56.741109    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:30:56.772199    5100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:30:56.789910    5100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:30:56.802591    5100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:30:56.831586    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:30:57.029306    5100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0428 18:30:57.065129    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:30:57.081225    5100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:30:57.104967    5100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:30:57.104967    5100 command_runner.go:130] > [Unit]
	I0428 18:30:57.104967    5100 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:30:57.104967    5100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:30:57.105037    5100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:30:57.105037    5100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:30:57.105037    5100 command_runner.go:130] > StartLimitBurst=3
	I0428 18:30:57.105073    5100 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:30:57.105073    5100 command_runner.go:130] > [Service]
	I0428 18:30:57.105117    5100 command_runner.go:130] > Type=notify
	I0428 18:30:57.105117    5100 command_runner.go:130] > Restart=on-failure
	I0428 18:30:57.105117    5100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:30:57.105156    5100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:30:57.105156    5100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:30:57.105210    5100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:30:57.105210    5100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:30:57.105250    5100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:30:57.105250    5100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:30:57.105301    5100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:30:57.105357    5100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecStart=
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:30:57.105357    5100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:30:57.105357    5100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > LimitCORE=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:30:57.105357    5100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:30:57.105357    5100 command_runner.go:130] > TasksMax=infinity
	I0428 18:30:57.105357    5100 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:30:57.105357    5100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:30:57.105357    5100 command_runner.go:130] > Delegate=yes
	I0428 18:30:57.105357    5100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:30:57.105357    5100 command_runner.go:130] > KillMode=process
	I0428 18:30:57.105357    5100 command_runner.go:130] > [Install]
	I0428 18:30:57.105357    5100 command_runner.go:130] > WantedBy=multi-user.target
	I0428 18:30:57.118659    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:30:57.153965    5100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:30:57.204253    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:30:57.240015    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:30:57.277276    5100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:30:57.345718    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:30:57.371346    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:30:57.409737    5100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 18:30:57.423205    5100 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:30:57.430233    5100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:30:57.441325    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:30:57.458054    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0428 18:30:57.502947    5100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:30:57.700154    5100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:30:57.882896    5100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:30:57.883180    5100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0428 18:30:57.927721    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:30:58.124953    5100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:31:00.770105    5100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6450046s)
	I0428 18:31:00.781386    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0428 18:31:00.815860    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:31:00.858671    5100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0428 18:31:01.050250    5100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0428 18:31:01.245194    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:01.445475    5100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0428 18:31:01.496426    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0428 18:31:01.534763    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:01.718829    5100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0428 18:31:01.836605    5100 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0428 18:31:01.857291    5100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0428 18:31:01.874846    5100 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0428 18:31:01.874846    5100 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0428 18:31:01.874846    5100 command_runner.go:130] > Device: 0,22	Inode: 858         Links: 1
	I0428 18:31:01.874846    5100 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0428 18:31:01.874846    5100 command_runner.go:130] > Access: 2024-04-29 01:31:01.743369559 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] > Modify: 2024-04-29 01:31:01.743369559 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] > Change: 2024-04-29 01:31:01.748369612 +0000
	I0428 18:31:01.874846    5100 command_runner.go:130] >  Birth: -
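The stat output above confirms the cri-dockerd socket appeared within the advertised 60s window. A small Go sketch of such a bounded wait follows; `waitForSocket` is an illustrative helper, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until a unix socket exists at path or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}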
	I0428 18:31:01.874846    5100 start.go:562] Will wait 60s for crictl version
	I0428 18:31:01.887754    5100 ssh_runner.go:195] Run: which crictl
	I0428 18:31:01.894982    5100 command_runner.go:130] > /usr/bin/crictl
	I0428 18:31:01.907488    5100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 18:31:01.975356    5100 command_runner.go:130] > Version:  0.1.0
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeName:  docker
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0428 18:31:01.975356    5100 command_runner.go:130] > RuntimeApiVersion:  v1
	I0428 18:31:01.975356    5100 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0428 18:31:01.984920    5100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:31:02.021960    5100 command_runner.go:130] > 26.0.2
	I0428 18:31:02.031724    5100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0428 18:31:02.062921    5100 command_runner.go:130] > 26.0.2
	I0428 18:31:02.067738    5100 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0428 18:31:02.067738    5100 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0428 18:31:02.069125    5100 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0428 18:31:02.072991    5100 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d9:46:b5 Flags:up|broadcast|multicast|running}
	I0428 18:31:02.073587    5100 ip.go:210] interface addr: fe80::ddcc:c7f1:f829:ae2f/64
	I0428 18:31:02.073587    5100 ip.go:210] interface addr: 172.27.224.1/20
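The ip.go lines above scan the host's network adapters for the one whose name starts with "vEthernet (Default Switch)" and take its IPv4 address (172.27.224.1 here) as the host.minikube.internal endpoint. A rough Go equivalent of that prefix search, as a sketch only:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Same prefix match as getIPForInterface in the log above.
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
				fmt.Println("host-side IPv4:", ipn.IP) // e.g. 172.27.224.1
			}
		}
	}
}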
	I0428 18:31:02.090160    5100 ssh_runner.go:195] Run: grep 172.27.224.1	host.minikube.internal$ /etc/hosts
	I0428 18:31:02.096353    5100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:31:02.117037    5100 kubeadm.go:877] updating cluster {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 18:31:02.117328    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:31:02.126708    5100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:31:02.150678    5100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:31:02.150678    5100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:31:02.151177    5100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:31:02.151177    5100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0428 18:31:02.151177    5100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0428 18:31:02.151177    5100 docker.go:615] Images already preloaded, skipping extraction
	I0428 18:31:02.161895    5100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0428 18:31:02.183468    5100 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0428 18:31:02.183468    5100 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0428 18:31:02.183468    5100 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 18:31:02.183468    5100 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0428 18:31:02.183468    5100 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0428 18:31:02.183468    5100 cache_images.go:84] Images are preloaded, skipping loading
	I0428 18:31:02.183468    5100 kubeadm.go:928] updating node { 172.27.239.170 8443 v1.30.0 docker true true} ...
	I0428 18:31:02.183468    5100 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-788600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.239.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 18:31:02.192446    5100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0428 18:31:02.227627    5100 command_runner.go:130] > cgroupfs
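Docker reports `cgroupfs` here, and the kubelet configuration generated below pins `cgroupDriver: cgroupfs` to the same value; a mismatch between the two prevents the kubelet from starting. A minimal sketch of the same query, assuming only a reachable Docker daemon:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as `docker info --format {{.CgroupDriver}}` in the log above.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	// This value must match the cgroupDriver in the KubeletConfiguration.
	fmt.Println("docker cgroup driver:", driver)
}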
	I0428 18:31:02.227627    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:31:02.227627    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:31:02.227627    5100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 18:31:02.227627    5100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.239.170 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-788600 NodeName:multinode-788600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.239.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.239.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 18:31:02.228352    5100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.239.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-788600"
	  kubeletExtraArgs:
	    node-ip: 172.27.239.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 18:31:02.243724    5100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubeadm
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubectl
	I0428 18:31:02.263782    5100 command_runner.go:130] > kubelet
	I0428 18:31:02.263782    5100 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 18:31:02.277865    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 18:31:02.295334    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0428 18:31:02.327593    5100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 18:31:02.355898    5100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0428 18:31:02.400601    5100 ssh_runner.go:195] Run: grep 172.27.239.170	control-plane.minikube.internal$ /etc/hosts
	I0428 18:31:02.407693    5100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.239.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 18:31:02.442067    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:02.626741    5100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:31:02.665784    5100 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600 for IP: 172.27.239.170
	I0428 18:31:02.665784    5100 certs.go:194] generating shared ca certs ...
	I0428 18:31:02.665784    5100 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:02.666397    5100 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0428 18:31:02.667047    5100 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0428 18:31:02.667047    5100 certs.go:256] generating profile certs ...
	I0428 18:31:02.667730    5100 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\client.key
	I0428 18:31:02.668417    5100 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66
	I0428 18:31:02.668505    5100 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.239.170]
	I0428 18:31:03.091055    5100 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 ...
	I0428 18:31:03.091055    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66: {Name:mkaf1a9c903a6c9cf9004a34772c2d8b3ee15342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:03.093044    5100 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66 ...
	I0428 18:31:03.093044    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66: {Name:mk024a6f259c1625f6490ba1e52b63b460f3073d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:03.094536    5100 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt.bf279c66 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt
	I0428 18:31:03.107123    5100 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key.bf279c66 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key
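The apiserver certificate generated above carries IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 172.27.239.170]: the in-cluster service IP, loopback, and the VM's address. A self-contained Go sketch of issuing such a cert with a throwaway CA follows; the names, lifetimes, and error handling here are illustrative, not minikube's certs.go (which signs with the persisted ~/.minikube CA).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for demonstration only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP SANs logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.27.239.170"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}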
	I0428 18:31:03.109129    5100 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 18:31:03.109129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 18:31:03.110129    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 18:31:03.110129    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem (1338 bytes)
	W0428 18:31:03.111127    5100 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228_empty.pem, impossibly tiny 0 bytes
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0428 18:31:03.111127    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0428 18:31:03.112121    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0428 18:31:03.112121    5100 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem (1708 bytes)
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem -> /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.112121    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.113143    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 18:31:03.164538    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0428 18:31:03.213913    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 18:31:03.259463    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 18:31:03.307159    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 18:31:03.356708    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0428 18:31:03.409218    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 18:31:03.461775    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 18:31:03.502141    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\3228.pem --> /usr/share/ca-certificates/3228.pem (1338 bytes)
	I0428 18:31:03.549108    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /usr/share/ca-certificates/32282.pem (1708 bytes)
	I0428 18:31:03.597203    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 18:31:03.642354    5100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 18:31:03.686876    5100 ssh_runner.go:195] Run: openssl version
	I0428 18:31:03.696135    5100 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0428 18:31:03.708139    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3228.pem && ln -fs /usr/share/ca-certificates/3228.pem /etc/ssl/certs/3228.pem"
	I0428 18:31:03.745183    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.753163    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.753526    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:26 /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.765193    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3228.pem
	I0428 18:31:03.774235    5100 command_runner.go:130] > 51391683
	I0428 18:31:03.786397    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3228.pem /etc/ssl/certs/51391683.0"
	I0428 18:31:03.814386    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32282.pem && ln -fs /usr/share/ca-certificates/32282.pem /etc/ssl/certs/32282.pem"
	I0428 18:31:03.850195    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.857810    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.857810    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:26 /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.870129    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32282.pem
	I0428 18:31:03.878498    5100 command_runner.go:130] > 3ec20f2e
	I0428 18:31:03.890751    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32282.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 18:31:03.922266    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 18:31:03.952546    5100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.960640    5100 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.960640    5100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.973542    5100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 18:31:03.982547    5100 command_runner.go:130] > b5213941
	I0428 18:31:03.992543    5100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
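The three `openssl x509 -hash` calls above compute subject hashes (51391683, 3ec20f2e, b5213941) and symlink each certificate into /etc/ssl/certs under `<hash>.0`, which is how OpenSSL locates trust anchors. A Go sketch of the same link step, assuming the openssl binary is on PATH; `linkCA` is an illustrative name, not minikube's API.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCA symlinks a PEM certificate into /etc/ssl/certs by its subject hash.
func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}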
	I0428 18:31:04.020878    5100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:31:04.027800    5100 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 18:31:04.027800    5100 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0428 18:31:04.027800    5100 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0428 18:31:04.027800    5100 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:31:04.027800    5100 command_runner.go:130] > Access: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] > Modify: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] > Change: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.027800    5100 command_runner.go:130] >  Birth: 2024-04-29 01:08:36.420738580 +0000
	I0428 18:31:04.039221    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0428 18:31:04.049656    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.061648    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0428 18:31:04.075450    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.089519    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0428 18:31:04.099116    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.110882    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0428 18:31:04.120974    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.133464    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0428 18:31:04.146142    5100 command_runner.go:130] > Certificate will not expire
	I0428 18:31:04.158268    5100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0428 18:31:04.167665    5100 command_runner.go:130] > Certificate will not expire
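Each `-checkend 86400` call above asks whether a certificate expires within the next 24 hours ("Certificate will not expire" means it does not). The same check can be done without shelling out; a sketch using Go's crypto/x509, with the path and window as illustrative inputs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: true if the certificate's
// NotAfter falls within the next d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}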
	I0428 18:31:04.168193    5100 kubeadm.go:391] StartCluster: {Name:multinode-788600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-788600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.230.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.237.64 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 18:31:04.178224    5100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:31:04.213190    5100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0428 18:31:04.233991    5100 command_runner.go:130] > /var/lib/minikube/etcd:
	I0428 18:31:04.233991    5100 command_runner.go:130] > member
	W0428 18:31:04.233991    5100 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0428 18:31:04.233991    5100 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0428 18:31:04.233991    5100 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0428 18:31:04.244993    5100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0428 18:31:04.263105    5100 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0428 18:31:04.263871    5100 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-788600" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:04.264562    5100 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-788600" cluster setting kubeconfig missing "multinode-788600" context setting]
	I0428 18:31:04.265326    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:04.279100    5100 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:04.279824    5100 kapi.go:59] client config for multinode-788600: &rest.Config{Host:"https://172.27.239.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-788600/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23e5ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 18:31:04.281162    5100 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 18:31:04.294422    5100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0428 18:31:04.312988    5100 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0428 18:31:04.312988    5100 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0428 18:31:04.312988    5100 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0428 18:31:04.312988    5100 command_runner.go:130] >  kind: InitConfiguration
	I0428 18:31:04.312988    5100 command_runner.go:130] >  localAPIEndpoint:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -  advertiseAddress: 172.27.231.169
	I0428 18:31:04.312988    5100 command_runner.go:130] > +  advertiseAddress: 172.27.239.170
	I0428 18:31:04.312988    5100 command_runner.go:130] >    bindPort: 8443
	I0428 18:31:04.312988    5100 command_runner.go:130] >  bootstrapTokens:
	I0428 18:31:04.312988    5100 command_runner.go:130] >    - groups:
	I0428 18:31:04.312988    5100 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0428 18:31:04.312988    5100 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0428 18:31:04.312988    5100 command_runner.go:130] >    name: "multinode-788600"
	I0428 18:31:04.312988    5100 command_runner.go:130] >    kubeletExtraArgs:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -    node-ip: 172.27.231.169
	I0428 18:31:04.312988    5100 command_runner.go:130] > +    node-ip: 172.27.239.170
	I0428 18:31:04.312988    5100 command_runner.go:130] >    taints: []
	I0428 18:31:04.312988    5100 command_runner.go:130] >  ---
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0428 18:31:04.312988    5100 command_runner.go:130] >  kind: ClusterConfiguration
	I0428 18:31:04.312988    5100 command_runner.go:130] >  apiServer:
	I0428 18:31:04.312988    5100 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	I0428 18:31:04.312988    5100 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	I0428 18:31:04.312988    5100 command_runner.go:130] >    extraArgs:
	I0428 18:31:04.312988    5100 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0428 18:31:04.313995    5100 command_runner.go:130] >  controllerManager:
	I0428 18:31:04.313995    5100 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.231.169
	+  advertiseAddress: 172.27.239.170
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-788600"
	   kubeletExtraArgs:
	-    node-ip: 172.27.231.169
	+    node-ip: 172.27.239.170
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.231.169"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.239.170"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
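The "config drift" message above comes from a byte-for-byte comparison: minikube renders a fresh kubeadm.yaml.new and runs `diff -u` against the copy already in the VM, and any non-empty diff (here the VM got a new address, 172.27.231.169 -> 172.27.239.170, so every IP-bearing field changed) forces a reconfigure. A minimal Go sketch of that check, relying only on diff's documented exit codes (0 identical, 1 different, 2 error); this is an illustration, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // detectDrift reports whether oldPath and newPath differ, returning the
    // unified diff when they do.
    func detectDrift(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // exit 0: identical
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // exit 1: files differ
    	}
    	return false, "", err // exit 2: a real error
    }

    func main() {
    	drifted, diff, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Print("detected kubeadm config drift:\n", diff)
    	}
    }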
	I0428 18:31:04.313995    5100 kubeadm.go:1154] stopping kube-system containers ...
	I0428 18:31:04.322985    5100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0428 18:31:04.353225    5100 command_runner.go:130] > 64e6fcf4a3f2
	I0428 18:31:04.353225    5100 command_runner.go:130] > 16ea9b9acd26
	I0428 18:31:04.353225    5100 command_runner.go:130] > 20d6a18478fc
	I0428 18:31:04.353225    5100 command_runner.go:130] > 70af634f6134
	I0428 18:31:04.353225    5100 command_runner.go:130] > 33e59494d8be
	I0428 18:31:04.353225    5100 command_runner.go:130] > 8542b2c39cf5
	I0428 18:31:04.353225    5100 command_runner.go:130] > 776d075f3716
	I0428 18:31:04.353225    5100 command_runner.go:130] > d1342e9d7111
	I0428 18:31:04.353225    5100 command_runner.go:130] > d55fefd692cf
	I0428 18:31:04.353225    5100 command_runner.go:130] > e148c0cdbae0
	I0428 18:31:04.353225    5100 command_runner.go:130] > edb2c636ad5d
	I0428 18:31:04.353225    5100 command_runner.go:130] > 27388b03fb26
	I0428 18:31:04.353225    5100 command_runner.go:130] > 038a267a1caf
	I0428 18:31:04.353225    5100 command_runner.go:130] > 9ffe1b8b41e4
	I0428 18:31:04.353225    5100 command_runner.go:130] > 8328e1b41d78
	I0428 18:31:04.353225    5100 command_runner.go:130] > 26381d4606b5
	I0428 18:31:04.354491    5100 docker.go:483] Stopping containers: [64e6fcf4a3f2 16ea9b9acd26 20d6a18478fc 70af634f6134 33e59494d8be 8542b2c39cf5 776d075f3716 d1342e9d7111 d55fefd692cf e148c0cdbae0 edb2c636ad5d 27388b03fb26 038a267a1caf 9ffe1b8b41e4 8328e1b41d78 26381d4606b5]
	I0428 18:31:04.364390    5100 ssh_runner.go:195] Run: docker stop 64e6fcf4a3f2 16ea9b9acd26 20d6a18478fc 70af634f6134 33e59494d8be 8542b2c39cf5 776d075f3716 d1342e9d7111 d55fefd692cf e148c0cdbae0 edb2c636ad5d 27388b03fb26 038a267a1caf 9ffe1b8b41e4 8328e1b41d78 26381d4606b5
	I0428 18:31:04.397389    5100 command_runner.go:130] > 64e6fcf4a3f2
	I0428 18:31:04.397389    5100 command_runner.go:130] > 16ea9b9acd26
	I0428 18:31:04.397539    5100 command_runner.go:130] > 20d6a18478fc
	I0428 18:31:04.397539    5100 command_runner.go:130] > 70af634f6134
	I0428 18:31:04.397539    5100 command_runner.go:130] > 33e59494d8be
	I0428 18:31:04.397539    5100 command_runner.go:130] > 8542b2c39cf5
	I0428 18:31:04.397539    5100 command_runner.go:130] > 776d075f3716
	I0428 18:31:04.397539    5100 command_runner.go:130] > d1342e9d7111
	I0428 18:31:04.397539    5100 command_runner.go:130] > d55fefd692cf
	I0428 18:31:04.397619    5100 command_runner.go:130] > e148c0cdbae0
	I0428 18:31:04.397619    5100 command_runner.go:130] > edb2c636ad5d
	I0428 18:31:04.397619    5100 command_runner.go:130] > 27388b03fb26
	I0428 18:31:04.397619    5100 command_runner.go:130] > 038a267a1caf
	I0428 18:31:04.397619    5100 command_runner.go:130] > 9ffe1b8b41e4
	I0428 18:31:04.397619    5100 command_runner.go:130] > 8328e1b41d78
	I0428 18:31:04.397619    5100 command_runner.go:130] > 26381d4606b5
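Stopping the kube-system containers leans on the naming convention cri-dockerd inherits from the kubelet, `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, so a docker name filter on `_(kube-system)_` selects exactly the control-plane and system pods. A sketch of the same list-then-stop sequence (hypothetical helper, run with docker privileges):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List all containers, running or exited, whose kubelet-assigned
    	// name places them in the kube-system namespace.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return // nothing to stop
    	}
    	// One `docker stop` invocation with every ID, as the log shows.
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Printf("stopped %d kube-system containers\n", len(ids))
    }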
	I0428 18:31:04.410385    5100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0428 18:31:04.456046    5100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 18:31:04.472006    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0428 18:31:04.472006    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0428 18:31:04.472993    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0428 18:31:04.472993    5100 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:31:04.472993    5100 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 18:31:04.472993    5100 kubeadm.go:156] found existing configuration files:
	
	I0428 18:31:04.484113    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 18:31:04.499059    5100 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:31:04.499059    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 18:31:04.510719    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 18:31:04.543169    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 18:31:04.557731    5100 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:31:04.558863    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 18:31:04.571495    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 18:31:04.601871    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 18:31:04.617538    5100 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:31:04.617538    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 18:31:04.633328    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 18:31:04.666719    5100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 18:31:04.682759    5100 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:31:04.682759    5100 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 18:31:04.694102    5100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 18:31:04.724740    5100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
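Because no /etc/kubernetes/*.conf files survived the stop, the `ls` check above exits with status 2 and the stale-config cleanup is skipped as a batch; each file is then still grepped individually for the pinned control-plane endpoint and removed when the grep fails, and only afterwards is the new kubeadm.yaml copied into place. A sketch of that per-file loop, assuming the same paths and endpoint as the log (root privileges assumed; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero when the endpoint is absent (or the file
    		// is missing); either way the possibly-stale file is removed.
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = exec.Command("rm", "-f", f).Run()
    		}
    	}
    }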
	I0428 18:31:04.743715    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:05.046800    5100 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0428 18:31:05.046916    5100 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0428 18:31:05.047042    5100 command_runner.go:130] > [certs] Using the existing "sa" key
	I0428 18:31:05.047042    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 18:31:05.789073    5100 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 18:31:05.789220    5100 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 18:31:05.789220    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.089406    5100 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 18:31:06.089521    5100 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 18:31:06.089521    5100 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0428 18:31:06.089521    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 18:31:06.200973    5100 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 18:31:06.200973    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:06.335221    5100 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
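Rather than a full `kubeadm init`, the restart replays individual init phases in dependency order (certs, kubeconfig, kubelet-start, control-plane, etcd); the `Using existing ...` lines show the certs phase is a no-op when keys are already on disk. A sketch of the sequence, using the versioned binary path from the log and simplified error handling:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		// The staged kubeadm matching the cluster version, as in the log.
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubeadm", args...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("phase %v failed: %v\n%s", phase, err, out))
    		}
    	}
    }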
	I0428 18:31:06.335297    5100 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:31:06.352189    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:06.860779    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:07.355397    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:07.859488    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:08.350929    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:08.376581    5100 command_runner.go:130] > 1873
	I0428 18:31:08.377248    5100 api_server.go:72] duration metric: took 2.0419465s to wait for apiserver process to appear ...
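The 2s wait above is a simple poll: roughly every 500ms, `pgrep -xnf` looks for the newest process whose full command line matches `kube-apiserver.*minikube.*`, and the first hit (pid 1873 here) ends the loop. Sketched in Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// -f matches against the full command line, -x requires the
    		// pattern to match it entirely, -n picks the newest process.
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("kube-apiserver process never appeared")
    }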
	I0428 18:31:08.377378    5100 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:31:08.377378    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.562154    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0428 18:31:11.562345    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0428 18:31:11.562345    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.666889    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:11.667094    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:11.892596    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:11.900932    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:11.900932    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:12.378092    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:12.393638    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:12.393764    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:12.886799    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:12.898497    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0428 18:31:12.898581    5100 api_server.go:103] status: https://172.27.239.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0428 18:31:13.392663    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:13.399821    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
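The healthz wait tolerates two failure shapes before this final 200: an early 403, because the unauthenticated probe is rejected while the RBAC bootstrap roles are still being created, and a run of 500s whose bodies enumerate which poststarthooks remain pending (the [-] lines, which shrink on each retry). The loop only needs to distinguish 200 from everything else. A minimal sketch; certificate verification is skipped here purely for brevity, whereas the real check trusts the cluster's own CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	for {
    		resp, err := client.Get("https://172.27.239.170:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			// 403 (RBAC not bootstrapped) and 500 (poststarthooks still
    			// pending) both just mean: retry.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }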
	I0428 18:31:13.400894    5100 round_trippers.go:463] GET https://172.27.239.170:8443/version
	I0428 18:31:13.400978    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:13.400978    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:13.400978    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:13.412818    5100 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0428 18:31:13.412818    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:13 GMT
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Audit-Id: b0a79bb7-8b25-46f1-b283-4f71e13e3f94
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:13.412818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:13.412818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:13.412818    5100 round_trippers.go:580]     Content-Length: 263
	I0428 18:31:13.412818    5100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:31:13.412818    5100 api_server.go:141] control plane version: v1.30.0
	I0428 18:31:13.412818    5100 api_server.go:131] duration metric: took 5.0354284s to wait for apiserver health ...
	I0428 18:31:13.412818    5100 cni.go:84] Creating CNI manager for ""
	I0428 18:31:13.412818    5100 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0428 18:31:13.417869    5100 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 18:31:13.436044    5100 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 18:31:13.445362    5100 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0428 18:31:13.445362    5100 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0428 18:31:13.445362    5100 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0428 18:31:13.445505    5100 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0428 18:31:13.445505    5100 command_runner.go:130] > Access: 2024-04-29 01:29:43.865545900 +0000
	I0428 18:31:13.445555    5100 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0428 18:31:13.445631    5100 command_runner.go:130] > Change: 2024-04-28 18:29:34.726000000 +0000
	I0428 18:31:13.445631    5100 command_runner.go:130] >  Birth: -
	I0428 18:31:13.445951    5100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 18:31:13.445951    5100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 18:31:13.547488    5100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0428 18:31:14.632537    5100 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0428 18:31:14.632691    5100 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0428 18:31:14.632691    5100 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0428 18:31:14.632718    5100 command_runner.go:130] > daemonset.apps/kindnet configured
	I0428 18:31:14.632809    5100 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0852276s)
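With three nodes detected, kindnet is chosen as the CNI, its manifest is copied into the VM, and the staged kubectl applies it against the cluster's own kubeconfig. Because `kubectl apply` is idempotent, re-running it on a restart yields the `unchanged`/`configured` lines above instead of errors. The equivalent invocation as it would run inside the VM, sketched in Go:

    package main

    import "os/exec"

    func main() {
    	// Apply the staged CNI manifest with the version-matched kubectl,
    	// pointing at the cluster's in-VM kubeconfig.
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubectl",
    		"apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }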
	I0428 18:31:14.632965    5100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:31:14.633166    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:14.633166    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:14.633166    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:14.633166    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:14.639871    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:14.639871    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:14.640274    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:14.640274    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:14 GMT
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Audit-Id: 248bcd12-c9b2-4c03-974b-33681c1e3b65
	I0428 18:31:14.640274    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:14.642794    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1806"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87778 chars]
	I0428 18:31:14.649754    5100 system_pods.go:59] 12 kube-system pods found
	I0428 18:31:14.650290    5100 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0428 18:31:14.650290    5100 system_pods.go:61] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:14.650290    5100 system_pods.go:61] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:14.650290    5100 system_pods.go:61] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0428 18:31:14.650462    5100 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:14.650646    5100 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0428 18:31:14.650646    5100 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0428 18:31:14.650646    5100 system_pods.go:74] duration metric: took 17.6807ms to wait for pod list to return data ...
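The pod wait is a single list of the kube-system namespace, summarised per pod; the `Running / Ready:ContainersNotReady` lines above combine the pod phase with its Ready and ContainersReady conditions. A client-go sketch of the same listing, assuming the in-VM kubeconfig path from the log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Phase alone says "Running"; the conditions carry readiness.
    		fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }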
	I0428 18:31:14.650646    5100 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:31:14.650646    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:14.650646    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:14.650646    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:14.650646    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:14.657389    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:14.657389    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Audit-Id: 537b24cc-1bc6-426b-ba20-af82c6e285ac
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:14.657389    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:14.657389    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:14.657389    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:14 GMT
	I0428 18:31:14.657389    5100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1806"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:14.659404    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:14.659404    5100 node_conditions.go:105] duration metric: took 8.7579ms to run NodePressure ...
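The NodePressure step just reads each node's capacity back out of that NodeList: all three nodes report 17734596Ki of ephemeral storage and 2 CPUs. A client-go sketch of the same readout (in-VM kubeconfig path assumed):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a map of resource name to quantity on each node.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    }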
	I0428 18:31:14.659404    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0428 18:31:15.095181    5100 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0428 18:31:15.095181    5100 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0428 18:31:15.096193    5100 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0428 18:31:15.096193    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0428 18:31:15.096193    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.096193    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.096193    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.136172    5100 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0428 18:31:15.136172    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Audit-Id: 65742097-3ca7-436d-bc20-f699a73df0d7
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.136172    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.136172    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.136172    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.138207    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1812"},"items":[{"metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1757","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0428 18:31:15.139771    5100 kubeadm.go:733] kubelet initialised
	I0428 18:31:15.139771    5100 kubeadm.go:734] duration metric: took 43.5779ms waiting for restarted kubelet to initialise ...
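The `%!D(MISSING)` in the logged URL above is a printf artifact, not a malformed request: the already-escaped query `labelSelector=tier%3Dcontrol-plane` was passed back through a format string, so the real selector is simply `tier=control-plane`, which matches the static control-plane pods the kubelet mirrors into the API. With client-go the same query looks like this (in-VM kubeconfig path assumed):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// client-go URL-encodes this to labelSelector=tier%3Dcontrol-plane.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "tier=control-plane"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Println(p.Name)
    	}
    }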
	I0428 18:31:15.139771    5100 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:15.139771    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:15.139771    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.139771    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.139771    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.145356    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:15.145950    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Audit-Id: 459a1c96-348d-496d-84c8-66eff19f8b17
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.145950    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.145950    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.145950    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.146022    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.147048    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1812"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0428 18:31:15.149647    5100 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.150653    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:15.150653    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.150653    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.150653    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.153647    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.153647    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.154132    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.154132    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Audit-Id: 00fb04df-3abb-4699-8d39-aaed3f0c4562
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.154132    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.154369    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:15.154928    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.155000    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.155000    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.155000    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.157642    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.157847    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.157847    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.157847    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.157847    5100 round_trippers.go:580]     Audit-Id: fe9b308f-e86b-4f3b-bb28-83392d7f2e48
	I0428 18:31:15.158186    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.158691    5100 pod_ready.go:97] node "multinode-788600" hosting pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.158973    5100 pod_ready.go:81] duration metric: took 9.3258ms for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.158973    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.158973    5100 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.159057    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:31:15.159127    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.159127    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.159127    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.171183    5100 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0428 18:31:15.171183    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.171183    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.171183    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Audit-Id: 9e8d3a67-7fc6-44da-a4ab-4c3bf297d313
	I0428 18:31:15.171183    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.171183    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1757","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0428 18:31:15.171183    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.172154    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.172154    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.172154    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.174165    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.174603    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Audit-Id: 58fceb9c-2f26-4fda-8c21-03ed3aef01a5
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.174603    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.174603    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.174603    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.175234    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.175376    5100 pod_ready.go:97] node "multinode-788600" hosting pod "etcd-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.175376    5100 pod_ready.go:81] duration metric: took 16.403ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.175376    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "etcd-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
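Both the coredns and etcd waits end immediately with `(skipping!)`: before blocking up to 4m0s on a pod, the wait first checks the pod's hosting node, and a node whose Ready condition is False can never yield a Ready pod, so the wait short-circuits with an error instead of burning the timeout. A sketch of that check, using the pod name from the log and the in-VM kubeconfig path as assumptions:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-788600", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if !nodeReady(node) {
    		// Don't wait the full timeout on a pod whose node isn't Ready.
    		fmt.Printf("node %q not Ready; skipping wait for %q\n", node.Name, pod.Name)
    	}
    }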
	I0428 18:31:15.175376    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.175376    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:15.175376    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.175376    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.175376    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.177956    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.178891    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.178891    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.178891    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.178891    5100 round_trippers.go:580]     Audit-Id: cc23e9ad-96dd-439b-a430-a3c689751251
	I0428 18:31:15.179004    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.179113    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"5ade8d95-5387-4444-95af-604116cf695e","resourceVersion":"1754","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.239.170:8443","kubernetes.io/config.hash":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.mirror":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.seen":"2024-04-29T01:31:06.268742128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0428 18:31:15.179786    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.179786    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.179877    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.179877    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.182704    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.182896    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Audit-Id: c3ba53e4-8df9-4d4e-bda5-185d6c10f77f
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.182896    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.182896    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.182896    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.182896    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.183632    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-apiserver-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.183632    5100 pod_ready.go:81] duration metric: took 8.2563ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.183632    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-apiserver-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.183632    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.183820    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:15.183820    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.183820    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.183820    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.186501    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.186501    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Audit-Id: 99893935-fb21-420c-9cff-c20de7ccb907
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.186939    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.186939    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.186939    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.187313    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:15.188091    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.188091    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.188091    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.188091    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.190500    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:15.190500    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.190500    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.190500    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.190500    5100 round_trippers.go:580]     Audit-Id: 7f56dd45-7d68-462a-a53e-5a85e89ccc57
	I0428 18:31:15.190500    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.191494    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-controller-manager-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.191494    5100 pod_ready.go:81] duration metric: took 7.7784ms for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.191494    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-controller-manager-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.191494    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.306676    5100 request.go:629] Waited for 114.7847ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:15.306676    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:15.306676    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.306676    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.306676    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.310457    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.311284    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Audit-Id: 103130c2-ca49-4b4a-92e6-5d0ccc0d6407
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.311284    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.311284    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.311284    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.311284    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"1811","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
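
Editor's note: the request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines above and below come from client-go's own token-bucket rate limiter (which defaults to roughly QPS=5, Burst=10 when no custom limiter is set), not from server-side API Priority and Fairness; the X-Kubernetes-Pf-Flowschema-Uid / X-Kubernetes-Pf-Prioritylevel-Uid response headers only identify the server-side APF bucket. A minimal sketch, assuming client-go and a standard kubeconfig path, of raising the client-side limits; the QPS/Burst values are illustrative, not minikube's configuration:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig from the conventional location (path assumed).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // The waits in the log are inserted by the client before sending a
        // request; raising these limits spaces out fewer requests but does
        // not bypass server-side APF. Values are illustrative only.
        cfg.QPS = 50
        cfg.Burst = 100
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("configured %T with QPS=%v Burst=%v\n", clientset, cfg.QPS, cfg.Burst)
    }
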
	I0428 18:31:15.508336    5100 request.go:629] Waited for 195.9795ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.508605    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:15.508605    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.508651    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.508667    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.512169    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.512169    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Audit-Id: 6961d0a4-358e-4e41-aa67-2f2730d6f3ff
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.512169    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.512464    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.512464    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.512464    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.512718    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:15.513623    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-proxy-bkkql" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.513623    5100 pod_ready.go:81] duration metric: took 322.1279ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:15.513623    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-proxy-bkkql" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:15.513623    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.696409    5100 request.go:629] Waited for 182.6614ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:15.696609    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:15.696609    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.696609    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.696609    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.700367    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:15.700367    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.701342    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.701342    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Audit-Id: 20e27f84-22b7-47b4-a097-76936ffa5a07
	I0428 18:31:15.701342    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.701658    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:31:15.900703    5100 request.go:629] Waited for 198.0923ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:15.900822    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:15.900822    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:15.900822    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:15.900822    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:15.909119    5100 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 18:31:15.909119    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:15.909119    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:15.909119    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:15 GMT
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Audit-Id: d0c1002e-a1b6-497f-892e-ddd3c4c172ec
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:15.909119    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:15.909119    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"1353","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0428 18:31:15.910040    5100 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:15.910040    5100 pod_ready.go:81] duration metric: took 396.4162ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:15.910040    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:16.102000    5100 request.go:629] Waited for 191.7654ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:16.102255    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:16.102255    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.102255    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.102255    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.105969    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.107006    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Audit-Id: 855ecca8-d4e6-430b-aa3c-4558037042ca
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.107038    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.107038    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.107038    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.107379    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjsfc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f06aadb7-e646-4105-af2f-0acc4a8ad174","resourceVersion":"1698","creationTimestamp":"2024-04-29T01:16:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:16:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0428 18:31:16.306385    5100 request.go:629] Waited for 198.1483ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:16.306425    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:16.306425    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.306425    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.306425    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.310172    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.311023    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.311023    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.311023    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.311023    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.311096    5100 round_trippers.go:580]     Audit-Id: 0f268a7f-8c37-4653-86df-96846cc991d3
	I0428 18:31:16.311337    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m03","uid":"d68977ad-af85-4957-85dc-4ad584113d26","resourceVersion":"1709","creationTimestamp":"2024-04-29T01:26:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_26_47_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:26:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0428 18:31:16.311937    5100 pod_ready.go:97] node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:16.311937    5100 pod_ready.go:81] duration metric: took 401.8965ms for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:16.311937    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:16.311937    5100 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:16.509495    5100 request.go:629] Waited for 197.318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:16.509644    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:16.509724    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.509724    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.509762    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.512765    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.513186    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.513186    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.513186    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Audit-Id: 43c41b94-99b3-45b3-823c-f7e75c2eefbe
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.513186    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.513458    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"1769","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0428 18:31:16.700515    5100 request.go:629] Waited for 186.1649ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:16.700515    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:16.700515    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:16.700515    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:16.700515    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:16.704023    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:16.705037    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Audit-Id: ee4d4b15-72df-4e5c-86f4-5490ccc9a289
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:16.705077    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:16.705077    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:16.705077    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:16 GMT
	I0428 18:31:16.705222    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1727","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0428 18:31:16.705767    5100 pod_ready.go:97] node "multinode-788600" hosting pod "kube-scheduler-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:16.705924    5100 pod_ready.go:81] duration metric: took 393.9853ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:16.705924    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600" hosting pod "kube-scheduler-multinode-788600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600" has status "Ready":"False"
	I0428 18:31:16.705924    5100 pod_ready.go:38] duration metric: took 1.566149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
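
Editor's note: the pod_ready.go sequence above shows the gating rule behind each "(skipping!)" line: a control-plane pod is counted as not "Ready" whenever the node hosting it does not report the NodeReady condition as True ("Ready":"False" on multinode-788600, "Ready":"Unknown" on the stopped m03), regardless of the pod's own status. A minimal sketch of that node-gating check with client-go; nodeReady is a hypothetical helper, and a standard kubeconfig is assumed:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's NodeReady condition is True,
    // mirroring the check the log performs after fetching each hosting node.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(context.Background(), cs, "multinode-788600")
        fmt.Println("node ready:", ready, "err:", err)
    }
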
	I0428 18:31:16.705924    5100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 18:31:16.724721    5100 command_runner.go:130] > -16
	I0428 18:31:16.725018    5100 ops.go:34] apiserver oom_adj: -16
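
Editor's note: the oom_adj probe above is run over SSH inside the VM; a value of -16 tells the kernel OOM killer to strongly prefer other processes over kube-apiserver. A minimal local equivalent in Go (Linux-only, illustrative; it assumes pgrep is on PATH and at least one matching process exists):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same lookup the log's ssh_runner performs with pgrep.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        // pgrep may print several PIDs, one per line; take the first.
        pid := strings.Fields(string(out))[0]
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
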
	I0428 18:31:16.725018    5100 kubeadm.go:591] duration metric: took 12.4909983s to restartPrimaryControlPlane
	I0428 18:31:16.725018    5100 kubeadm.go:393] duration metric: took 12.5567953s to StartCluster
	I0428 18:31:16.725018    5100 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:16.725018    5100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 18:31:16.726568    5100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 18:31:16.727966    5100 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.239.170 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0428 18:31:16.727966    5100 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 18:31:16.728603    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:31:16.732826    5100 out.go:177] * Verifying Kubernetes components...
	I0428 18:31:16.737476    5100 out.go:177] * Enabled addons: 
	I0428 18:31:16.742152    5100 addons.go:505] duration metric: took 14.1858ms for enable addons: enabled=[]
	I0428 18:31:16.751296    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:31:17.008730    5100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 18:31:17.039776    5100 node_ready.go:35] waiting up to 6m0s for node "multinode-788600" to be "Ready" ...
	I0428 18:31:17.040103    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.040103    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.040146    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.040172    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.043764    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.043764    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Audit-Id: a8273f55-9742-4a3a-93b9-eca47c09292d
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.043764    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.043764    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.043764    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.044784    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:17.044784    5100 node_ready.go:49] node "multinode-788600" has status "Ready":"True"
	I0428 18:31:17.044784    5100 node_ready.go:38] duration metric: took 4.9181ms for node "multinode-788600" to be "Ready" ...
	I0428 18:31:17.044784    5100 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 18:31:17.109075    5100 request.go:629] Waited for 64.0491ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:17.109310    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:17.109310    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.109310    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.109310    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.114919    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:17.115371    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.115427    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Audit-Id: 006c7d51-eccd-4506-a698-005b0daa1d0b
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.115427    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.115427    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.116826    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1817"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0428 18:31:17.120579    5100 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:17.297742    5100 request.go:629] Waited for 177.1623ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.297742    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.297742    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.297742    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.297742    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.301521    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.301521    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.301521    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.301521    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Audit-Id: 8e66d5c6-ec9a-4aa3-9b06-d540afe60889
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.301521    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.302710    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:17.499470    5100 request.go:629] Waited for 195.8663ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.499470    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.499470    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.499470    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.499470    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.503650    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:17.503650    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.503650    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.503650    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.503755    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Audit-Id: 81db7e77-99aa-4860-9e04-b6ee3d7ee5e6
	I0428 18:31:17.503755    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.504045    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:17.703045    5100 request.go:629] Waited for 78.0265ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.703158    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:17.703158    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.703158    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.703158    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.708829    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:17.709368    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Audit-Id: 5590ba60-674b-44c2-82f1-0b5501385170
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.709443    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.709443    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.709443    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.709717    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:17.907071    5100 request.go:629] Waited for 196.8197ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.907260    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:17.907260    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:17.907260    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:17.907260    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:17.912062    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:17.912062    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Audit-Id: 7eb648bb-2c0e-4586-8efc-8ed163da53ce
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:17.912062    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:17.912062    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:17.912062    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:17 GMT
	I0428 18:31:17.912062    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:18.125074    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:18.125176    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.125176    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.125176    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.130106    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:18.130391    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Audit-Id: 16906fd6-6d66-4bc7-9365-56443fcce4da
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.130455    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.130455    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.130455    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.130455    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:18.297115    5100 request.go:629] Waited for 165.6205ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.297115    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.297115    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.297115    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.297115    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.301050    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:18.301050    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.301050    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.301050    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.301771    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.301771    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.301771    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.301771    5100 round_trippers.go:580]     Audit-Id: 493da01f-28a4-469a-b479-0e5c634dcda6
	I0428 18:31:18.302106    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:18.623750    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:18.623750    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.623884    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.623884    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.627295    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:18.627295    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.627295    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Audit-Id: 0158c0f7-3b76-4cc8-88e6-20a75e3a14a6
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.628185    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.628185    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.628287    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:18.701220    5100 request.go:629] Waited for 71.8291ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.701447    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:18.701447    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:18.701447    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:18.701447    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:18.705727    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:18.706655    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:18.706655    5100 round_trippers.go:580]     Audit-Id: 857798b2-ed0f-4456-ac6b-802e8e992d5a
	I0428 18:31:18.706655    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:18.706716    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:18.706716    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:18.706716    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:18.706716    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:18 GMT
	I0428 18:31:18.707322    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:19.125144    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:19.125458    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.125458    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.125458    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.129851    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:19.129851    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.130436    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Audit-Id: e680eb2e-fdde-4f45-8785-96cc96451ae4
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.130436    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.130436    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.130645    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:19.131464    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:19.131464    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.131464    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.131539    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.135413    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:19.135592    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.135592    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Audit-Id: 94b7ce89-e9f4-4224-84b3-b2a746aed8d9
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.135592    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.135592    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.136057    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:19.136636    5100 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:19.625365    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:19.625365    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.625365    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.625365    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.629585    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:19.630350    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Audit-Id: 9a54d882-18e4-412a-95e9-2944c7341b61
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.630350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.630350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.630350    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.631010    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:19.631732    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:19.631732    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:19.631732    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:19.631732    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:19.634764    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:19.635282    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:19.635282    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:19.635282    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:19 GMT
	I0428 18:31:19.635282    5100 round_trippers.go:580]     Audit-Id: ce01b556-8310-4cd0-97b1-00048e3ce5ef
	I0428 18:31:19.635367    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:19.635644    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:20.125337    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:20.125563    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.125563    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.125563    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.130243    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:20.130243    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Audit-Id: d66e8822-e755-4521-8c73-cf13c831f445
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.130340    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.130340    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.130340    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.130550    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:20.131365    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:20.131365    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.131365    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.131365    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.135405    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:20.135608    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Audit-Id: 45cd6471-74c5-4493-b702-d89fd8d35e5d
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.135608    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.135608    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.135608    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.136101    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:20.634410    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:20.634488    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.634488    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.634557    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.637052    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:20.637426    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Audit-Id: ab177460-eb95-46f5-a35e-f25819254aeb
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.637541    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.637541    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.637541    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.637794    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:20.638636    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:20.638636    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:20.638636    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:20.638695    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:20.641492    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:20.641556    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:20 GMT
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Audit-Id: ed017748-8a58-4062-9bb8-e81c00b3cba6
	I0428 18:31:20.641556    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:20.641624    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:20.641624    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:20.641624    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:20.641935    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.127928    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:21.127928    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.127928    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.127928    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.132962    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:21.133357    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.133357    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.133357    5100 round_trippers.go:580]     Audit-Id: 1230feb4-c38f-4839-9a95-4f3d25a63a95
	I0428 18:31:21.133430    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.133430    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.133430    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.133430    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.133643    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1772","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0428 18:31:21.134444    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.134444    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.134444    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.134444    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.140109    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:21.140391    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.140391    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.140492    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Audit-Id: 0db692a7-5837-417e-8d92-b8c244e93eee
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.140492    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.140806    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.141367    5100 pod_ready.go:102] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:21.633646    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rp2lx
	I0428 18:31:21.633743    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.633743    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.633743    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.637104    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.638230    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Audit-Id: f68ff9c4-1dfd-405f-a796-cc57177a2633
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.638230    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.638230    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.638230    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.638622    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0428 18:31:21.639344    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.639415    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.639415    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.639415    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.642703    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.642882    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Audit-Id: 156045d7-ea62-439c-a5a2-764198fcf8fc
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.642882    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.642882    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.642882    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.643283    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.643782    5100 pod_ready.go:92] pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.643853    5100 pod_ready.go:81] duration metric: took 4.5231918s for pod "coredns-7db6d8ff4d-rp2lx" in "kube-system" namespace to be "Ready" ...
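[Editor's note: the pod_ready lines above show the minikube client fetching the coredns Pod roughly every 500ms until its Ready condition flips to True (4.5s in total here), with a 6m0s ceiling per the "waiting up to 6m0s" line. A minimal sketch of that polling pattern with client-go follows; this is not minikube's actual code — the pod name, namespace, interval, and timeout are taken from this log, everything else is assumption.]

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the Pod on an interval until its Ready condition
    // reports True or the timeout expires, mirroring the GET loop in the
    // log above.
    func waitPodReady(cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-rp2lx"); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }

[The same loop then repeats below for etcd, kube-apiserver, and kube-controller-manager; only the pod name changes.]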
	I0428 18:31:21.643853    5100 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.644054    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-788600
	I0428 18:31:21.644110    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.644110    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.644110    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.646053    5100 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0428 18:31:21.646894    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.646894    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.646894    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Audit-Id: f35af6a5-cb54-4f3a-a859-d4268c14877e
	I0428 18:31:21.646894    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.647187    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-788600","namespace":"kube-system","uid":"f87bd4ae-4a5c-4587-a9e8-d381c5b76c63","resourceVersion":"1828","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.239.170:2379","kubernetes.io/config.hash":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.mirror":"876d6f7cff87a27bed899cda339578e9","kubernetes.io/config.seen":"2024-04-29T01:31:06.337700959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0428 18:31:21.647739    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.647739    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.647739    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.647739    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.650311    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:21.650311    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Audit-Id: 2caa71c7-c1b8-47dc-9700-df9b0410bb56
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.650311    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.650502    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.650502    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.650502    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.650685    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.650685    5100 pod_ready.go:92] pod "etcd-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.650685    5100 pod_ready.go:81] duration metric: took 6.8321ms for pod "etcd-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.650685    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.710990    5100 request.go:629] Waited for 60.172ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:21.711066    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-788600
	I0428 18:31:21.711066    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.711066    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.711066    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.714561    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:21.714561    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Audit-Id: fc8e88b9-66f9-4898-9ff1-4315cda3ab66
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.715037    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.715037    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.715037    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.715299    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-788600","namespace":"kube-system","uid":"5ade8d95-5387-4444-95af-604116cf695e","resourceVersion":"1819","creationTimestamp":"2024-04-29T01:31:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.239.170:8443","kubernetes.io/config.hash":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.mirror":"e1f1ff8c6e0ecb526bd6baa448e7335e","kubernetes.io/config.seen":"2024-04-29T01:31:06.268742128Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:31:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0428 18:31:21.897294    5100 request.go:629] Waited for 181.1138ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.897451    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:21.897451    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:21.897451    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:21.897451    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:21.902008    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:21.902330    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:21.902330    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:21 GMT
	I0428 18:31:21.902405    5100 round_trippers.go:580]     Audit-Id: 455e52d6-9783-4cd0-ba22-d7ced6bdbde5
	I0428 18:31:21.902474    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:21.902513    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:21.902513    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:21.902563    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:21.902731    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:21.903336    5100 pod_ready.go:92] pod "kube-apiserver-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:21.903336    5100 pod_ready.go:81] duration metric: took 252.6502ms for pod "kube-apiserver-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:21.903390    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
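[Editor's note: the recurring "Waited for … due to client-side throttling, not priority and fairness" lines in this log come from client-go's client-side token-bucket rate limiter, which delays a request once the burst budget is spent; the 60–200ms waits here are that limiter pacing the polling loop, not the API server pushing back. A hedged sketch of where those knobs live follows — the QPS/Burst values are illustrative, not what minikube uses.]

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go throttles on the client side with a token bucket; once
        // the burst budget is exhausted, request.go logs the "Waited for ..."
        // lines seen in this log. Raising QPS/Burst, or installing a limiter
        // directly (which takes precedence over both fields), changes that
        // pacing. Values here are illustrative assumptions.
        cfg.QPS = 50
        cfg.Burst = 100
        cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Printf("client ready: %T\n", cs)
    }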
	I0428 18:31:22.101010    5100 request.go:629] Waited for 197.3159ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.101123    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.101329    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.101329    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.101329    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.105803    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.105803    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.105803    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.105803    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Audit-Id: 2718f490-3370-4fab-81d1-075ce51d9a4b
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.106752    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.106752    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.107214    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:22.302267    5100 request.go:629] Waited for 194.1916ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.302870    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.302870    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.302870    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.302870    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.306443    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:22.307139    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Audit-Id: 1859c6bd-dd6f-46f3-8023-86dfbf522bb5
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.307139    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.307139    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.307139    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.307433    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:22.503587    5100 request.go:629] Waited for 93.5627ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.503911    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.503911    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.503911    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.503911    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.508599    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.508599    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.508599    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.508599    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Audit-Id: 69d38582-07ce-450b-9982-677772a19f0f
	I0428 18:31:22.508599    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.508599    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:22.705855    5100 request.go:629] Waited for 196.1165ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.706020    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:22.706020    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.706020    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.706020    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.710776    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:22.710776    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Audit-Id: b319da81-14dd-4a76-b77b-5cad9a9f0cdd
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.710776    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.710776    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.710776    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.711099    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:22.909509    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:22.909509    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:22.909509    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:22.909509    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:22.913239    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:22.913239    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:22 GMT
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Audit-Id: 90797ee4-eb66-443a-bee2-91e3160ae5a3
	I0428 18:31:22.914065    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:22.914152    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:22.914197    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:22.914197    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:22.914394    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.096951    5100 request.go:629] Waited for 181.6718ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.097189    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.097189    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.097189    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.097189    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.103361    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:23.103791    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.103791    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.103791    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Audit-Id: 195353fb-71f8-4541-826a-8108aaac1962
	I0428 18:31:23.103791    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.104000    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.410524    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:23.410524    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.410524    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.410524    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.418485    5100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:31:23.418637    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.418637    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.418637    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.418637    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.418723    5100 round_trippers.go:580]     Audit-Id: a9965ad6-304f-4265-b0f7-4574d439bc5e
	I0428 18:31:23.418987    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.504617    5100 request.go:629] Waited for 84.6283ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.504868    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.504868    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.504908    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.504908    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.512339    5100 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0428 18:31:23.512339    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Audit-Id: 44c42d96-1347-4a1d-bb98-6efab260b0a9
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.512339    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.512339    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.512339    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.512948    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.912694    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:23.912694    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.912694    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.912694    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.916280    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:23.917051    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.917051    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.917051    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.917051    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.917169    5100 round_trippers.go:580]     Audit-Id: e56e8589-fd0b-4a10-8978-88a5498adf87
	I0428 18:31:23.917169    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.917255    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.917386    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:23.918466    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:23.918466    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:23.918466    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:23.918545    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:23.920990    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:23.920990    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:23.921364    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:23.921364    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:23 GMT
	I0428 18:31:23.921364    5100 round_trippers.go:580]     Audit-Id: 459d79fa-7fd5-458c-b59b-4aa09ca2d11f
	I0428 18:31:23.921619    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:23.921844    5100 pod_ready.go:102] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"False"
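	[editor's note] The pod_ready lines trace a poll loop: fetch the pod, report its PodReady condition, and retry on a roughly 500ms cadence until the condition is True or the 6m0s timeout expires. A minimal client-go sketch of that pattern, assuming a default kubeconfig; this illustrates the technique, not minikube's actual pod_ready.go:

```go
// Minimal sketch of a pod-readiness poll loop with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady re-fetches the pod every 500ms (matching the log's cadence)
// until it is Ready or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if isPodReady(pod) {
			fmt.Printf("pod %q has status \"Ready\":\"True\"\n", name)
			return nil
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Assumed kubeconfig location for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs,
		"kube-system", "kube-controller-manager-multinode-788600", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```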
	I0428 18:31:24.403813    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:24.403813    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.403898    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.403898    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.407347    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:24.407347    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.407880    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.407880    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Audit-Id: 6c51501b-33a9-4f17-83a5-0d289e64f234
	I0428 18:31:24.407880    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.408280    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:24.409107    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:24.409107    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.409107    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.409107    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.418873    5100 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0428 18:31:24.418999    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.418999    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.418999    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Audit-Id: c65fc721-9bdd-425f-884a-ac4fc9762dac
	I0428 18:31:24.418999    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.418999    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:24.907990    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:24.907990    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.907990    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.907990    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.911050    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:24.911818    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.911818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.911818    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Audit-Id: b319d2c2-62a5-4196-b683-3941c10aa59c
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.911818    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.912137    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:24.912842    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:24.912842    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:24.912842    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:24.912842    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:24.915423    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:24.915423    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:24 GMT
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Audit-Id: 84f5841e-e7ee-45e3-a703-0f959c7f358a
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:24.915997    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:24.915997    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:24.915997    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:24.916211    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:25.406479    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:25.406479    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.406479    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.406479    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.410068    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:25.411003    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.411003    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Audit-Id: 65af6da8-cf58-4415-9bd1-78eb11064ed9
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.411003    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.411085    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.411437    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:25.412086    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:25.412086    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.412086    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.412086    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.416108    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:25.416108    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.416108    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.416108    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.416108    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.416108    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.416349    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.416349    5100 round_trippers.go:580]     Audit-Id: 85a041c9-f007-4e8d-a7e5-2d480a07a6f2
	I0428 18:31:25.416451    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:25.905969    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:25.906041    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.906041    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.906041    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.910420    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:25.910753    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.910753    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Audit-Id: f26b3776-3168-481a-a906-dc87ef8303f5
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.910753    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.910753    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.911278    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:25.912093    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:25.912171    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:25.912243    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:25.912280    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:25.916509    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:25.916564    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:25.916564    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:25.916606    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:25.916606    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:25.916606    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:25 GMT
	I0428 18:31:25.916606    5100 round_trippers.go:580]     Audit-Id: 4e2b371e-42dc-4d12-9f9d-0c0566f49f31
	I0428 18:31:25.916652    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:25.917158    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.406983    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:26.407082    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.407082    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.407082    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.411527    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:26.411527    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.412073    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.412073    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Audit-Id: b9d642b0-29ca-47a0-af35-12fa93ac8141
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.412073    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.412518    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1749","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0428 18:31:26.413377    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.413377    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.413469    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.413469    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.416937    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.416937    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Audit-Id: 5bde9dee-4272-4b16-9ef7-cef4f1306ca7
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.417604    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.417604    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.417604    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.417907    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.418633    5100 pod_ready.go:102] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"False"
	I0428 18:31:26.910803    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-788600
	I0428 18:31:26.910803    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.910803    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.910803    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.914461    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.915082    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.915082    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.915082    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Audit-Id: cf8512b6-0c9a-49e4-b462-11a9c7c0186e
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.915082    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.915465    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-788600","namespace":"kube-system","uid":"b7d7893e-bd95-4f96-879f-a8378040fc03","resourceVersion":"1845","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.mirror":"4239cb695125cadb48fe20f9f8ad165e","kubernetes.io/config.seen":"2024-04-29T01:08:48.885069833Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0428 18:31:26.916199    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.916253    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.916253    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.916253    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.919831    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.919831    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.919831    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.919831    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Audit-Id: eb964c52-b7a1-4dce-84d1-d5ced6289e32
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.919831    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.919831    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.920847    5100 pod_ready.go:92] pod "kube-controller-manager-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:26.920847    5100 pod_ready.go:81] duration metric: took 5.0174446s for pod "kube-controller-manager-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.920847    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.920847    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bkkql
	I0428 18:31:26.920847    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.920847    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.920847    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.923862    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.923991    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Audit-Id: 34326d60-61eb-4e29-9e55-3265edff4448
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.923991    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.923991    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.923991    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.924328    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bkkql","generateName":"kube-proxy-","namespace":"kube-system","uid":"eccd7725-151c-4770-b99c-cb308b31389c","resourceVersion":"1811","creationTimestamp":"2024-04-29T01:09:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0428 18:31:26.925059    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:26.925157    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.925157    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.925157    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.929745    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:26.930529    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Audit-Id: 21fcb88b-b68a-4e51-b75f-79f6bbbc4901
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.930529    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.930529    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.930529    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.930529    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:26.930529    5100 pod_ready.go:92] pod "kube-proxy-bkkql" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:26.930529    5100 pod_ready.go:81] duration metric: took 9.6822ms for pod "kube-proxy-bkkql" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.930529    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:26.930529    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc8c4
	I0428 18:31:26.930529    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:26.930529    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:26.930529    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:26.933549    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:26.933549    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Audit-Id: bafcc134-e6f0-426a-a801-c20dfa8ae175
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:26.933549    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:26.933549    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:26.933549    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:26 GMT
	I0428 18:31:26.933549    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kc8c4","generateName":"kube-proxy-","namespace":"kube-system","uid":"340b4c9b-449f-4208-846e-dec867826bf7","resourceVersion":"625","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0428 18:31:27.098538    5100 request.go:629] Waited for 163.8061ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:27.098710    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m02
	I0428 18:31:27.098710    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.098710    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.098710    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.102441    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.102441    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.103445    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.103445    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Audit-Id: fb5898ce-a6b8-4a4a-b6d5-31ad26eecf80
	I0428 18:31:27.103445    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.103520    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.105457    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m02","uid":"41c98ac8-1ed8-4900-8cc4-827ec42add9b","resourceVersion":"1353","creationTimestamp":"2024-04-29T01:11:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_11_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:11:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0428 18:31:27.105457    5100 pod_ready.go:92] pod "kube-proxy-kc8c4" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:27.105457    5100 pod_ready.go:81] duration metric: took 174.9279ms for pod "kube-proxy-kc8c4" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.105457    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.301745    5100 request.go:629] Waited for 195.5395ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:27.302056    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjsfc
	I0428 18:31:27.302056    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.302056    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.302056    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.307781    5100 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 18:31:27.307860    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.307860    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.307860    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.307926    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.307926    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.307952    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.307952    5100 round_trippers.go:580]     Audit-Id: 7efa1919-f143-4c8f-b032-2b86afdfc5a3
	I0428 18:31:27.307981    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjsfc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f06aadb7-e646-4105-af2f-0acc4a8ad174","resourceVersion":"1698","creationTimestamp":"2024-04-29T01:16:25Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"14e08ec6-60e5-4983-988c-cf6bb74cee3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:16:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14e08ec6-60e5-4983-988c-cf6bb74cee3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0428 18:31:27.502902    5100 request.go:629] Waited for 193.7858ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:27.503060    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600-m03
	I0428 18:31:27.503060    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.503060    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.503060    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.506683    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.507255    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Audit-Id: 9d27db1a-1bf1-43d7-9ff4-dca89bead646
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.507255    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.507255    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.507255    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.507493    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600-m03","uid":"d68977ad-af85-4957-85dc-4ad584113d26","resourceVersion":"1842","creationTimestamp":"2024-04-29T01:26:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_28T18_26_47_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:26:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0428 18:31:27.508040    5100 pod_ready.go:97] node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:27.508183    5100 pod_ready.go:81] duration metric: took 402.6814ms for pod "kube-proxy-sjsfc" in "kube-system" namespace to be "Ready" ...
	E0428 18:31:27.508199    5100 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-788600-m03" hosting pod "kube-proxy-sjsfc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-788600-m03" has status "Ready":"Unknown"
	I0428 18:31:27.508199    5100 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.706822    5100 request.go:629] Waited for 198.3375ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:27.707038    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-788600
	I0428 18:31:27.707038    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.707038    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.707038    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.710618    5100 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 18:31:27.710618    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.710618    5100 round_trippers.go:580]     Audit-Id: 346dffd5-6ed0-444b-982a-bdfbd2984a5d
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.710785    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.710785    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.710785    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.710965    5100 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-788600","namespace":"kube-system","uid":"55bd2888-a3b6-498a-9352-8b15bcc5e545","resourceVersion":"1834","creationTimestamp":"2024-04-29T01:08:49Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.mirror":"efaaef68fa82e79d9434de065e5d40a6","kubernetes.io/config.seen":"2024-04-29T01:08:48.885071033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:08:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0428 18:31:27.909797    5100 request.go:629] Waited for 197.525ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:27.910028    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes/multinode-788600
	I0428 18:31:27.910109    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.910109    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.910109    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.914589    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:27.914589    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.914589    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.914589    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.914589    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Audit-Id: 9b5cf9aa-ca13-4191-8718-7bcc2058694f
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.914668    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.914843    5100 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T01:08:45Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0428 18:31:27.915494    5100 pod_ready.go:92] pod "kube-scheduler-multinode-788600" in "kube-system" namespace has status "Ready":"True"
	I0428 18:31:27.915494    5100 pod_ready.go:81] duration metric: took 407.2947ms for pod "kube-scheduler-multinode-788600" in "kube-system" namespace to be "Ready" ...
	I0428 18:31:27.915494    5100 pod_ready.go:38] duration metric: took 10.8706849s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
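[annotation] The 10.87 s wait above resolves each system pod's Ready condition one by one, deliberately skipping pods scheduled on a node whose own Ready status is Unknown (as happened for kube-proxy-sjsfc on multinode-788600-m03). As a rough illustration of the per-pod check, here is a minimal client-go sketch; the kubeconfig path is a placeholder, not a value from this run:

    // podready_sketch.go: minimal sketch of the Ready-condition check the log
    // performs for each system pod. Assumes a reachable cluster; the kubeconfig
    // path below is an illustrative placeholder.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\me\.kube\config`) // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-sjsfc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s ready: %v\n", pod.Name, isPodReady(pod))
    }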
	I0428 18:31:27.915494    5100 api_server.go:52] waiting for apiserver process to appear ...
	I0428 18:31:27.928493    5100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:31:27.958046    5100 command_runner.go:130] > 1873
	I0428 18:31:27.958165    5100 api_server.go:72] duration metric: took 11.2301726s to wait for apiserver process to appear ...
	I0428 18:31:27.958165    5100 api_server.go:88] waiting for apiserver healthz status ...
	I0428 18:31:27.958239    5100 api_server.go:253] Checking apiserver healthz at https://172.27.239.170:8443/healthz ...
	I0428 18:31:27.966618    5100 api_server.go:279] https://172.27.239.170:8443/healthz returned 200:
	ok
	I0428 18:31:27.967716    5100 round_trippers.go:463] GET https://172.27.239.170:8443/version
	I0428 18:31:27.967756    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:27.967798    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:27.967798    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:27.970713    5100 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 18:31:27.970929    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:27.970929    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Content-Length: 263
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:27 GMT
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Audit-Id: d08eef7e-51d9-480d-801f-83d53e5365c3
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:27.970929    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:27.971026    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:27.971026    5100 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0428 18:31:27.971163    5100 api_server.go:141] control plane version: v1.30.0
	I0428 18:31:27.971195    5100 api_server.go:131] duration metric: took 12.9561ms to wait for apiserver health ...
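[annotation] Once every pod reports Ready, the flow degrades to two plain HTTPS probes: GET /healthz must return 200 with body "ok", then GET /version is decoded for the control-plane version (v1.30.0 in this run). A stripped-down sketch follows; disabling certificate verification and assuming anonymous access to these endpoints are simplifications for brevity — the real client authenticates with the cluster's TLS certificates:

    // healthz_sketch.go: probe apiserver /healthz and /version as the log does.
    // InsecureSkipVerify and anonymous access are assumptions to keep this short.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        resp, err := client.Get("https://172.27.239.170:8443/healthz")
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

        resp, err = client.Get("https://172.27.239.170:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.0 in this run
    }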
	I0428 18:31:27.971195    5100 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 18:31:28.110224    5100 request.go:629] Waited for 138.7183ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.110224    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.110224    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.110224    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.110224    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.117002    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:28.117293    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Audit-Id: 35f8cbc1-51d6-4b4a-b6c5-4c6af5816f17
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.117293    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.117293    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.117293    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.118618    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86158 chars]
	I0428 18:31:28.122837    5100 system_pods.go:59] 12 kube-system pods found
	I0428 18:31:28.122837    5100 system_pods.go:61] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:28.122937    5100 system_pods.go:61] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:31:28.123014    5100 system_pods.go:61] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:31:28.123089    5100 system_pods.go:74] duration metric: took 151.8941ms to wait for pod list to return data ...
	I0428 18:31:28.123142    5100 default_sa.go:34] waiting for default service account to be created ...
	I0428 18:31:28.311814    5100 request.go:629] Waited for 188.3166ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:31:28.311814    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/default/serviceaccounts
	I0428 18:31:28.311814    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.311814    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.311814    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.316444    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:28.317105    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.317105    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.317105    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Content-Length: 262
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Audit-Id: cd65f6c5-26c4-4ad7-aba0-8dea016a8f55
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.317204    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.317204    5100 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cd75ac33-a0a3-4b71-9266-aa10ab97a649","resourceVersion":"328","creationTimestamp":"2024-04-29T01:09:02Z"}}]}
	I0428 18:31:28.317550    5100 default_sa.go:45] found service account: "default"
	I0428 18:31:28.317550    5100 default_sa.go:55] duration metric: took 194.4066ms for default service account to be created ...
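[annotation] The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines (138-198 ms each above) are client-go's own rate limiter pacing the burst of GETs; the defaults have historically been QPS 5 with burst 10. Had the waits mattered, they could be tuned on the rest.Config — a minimal sketch, with placeholder kubeconfig path and illustrative limits:

    // qps_sketch.go: raising client-go's client-side rate limits, which cause
    // the "Waited for ... due to client-side throttling" lines in the log.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // sustained requests per second before throttling kicks in
        cfg.Burst = 100 // short bursts allowed above QPS
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }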
	I0428 18:31:28.317659    5100 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 18:31:28.498845    5100 request.go:629] Waited for 181.1371ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.499029    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/namespaces/kube-system/pods
	I0428 18:31:28.499029    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.499029    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.499029    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.505707    5100 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 18:31:28.505707    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.505707    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Audit-Id: aa46fee1-69c6-4bcc-a38e-ab3ddbb26b03
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.506263    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.506263    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.507406    5100 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-rp2lx","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d6f6f38d-f1f3-454e-a469-c76c8fbc5d99","resourceVersion":"1831","creationTimestamp":"2024-04-29T01:09:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"adf2f86a-41f2-4157-8a7a-43bd16c155b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T01:09:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adf2f86a-41f2-4157-8a7a-43bd16c155b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86158 chars]
	I0428 18:31:28.512076    5100 system_pods.go:86] 12 kube-system pods found
	I0428 18:31:28.512215    5100 system_pods.go:89] "coredns-7db6d8ff4d-rp2lx" [d6f6f38d-f1f3-454e-a469-c76c8fbc5d99] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "etcd-multinode-788600" [f87bd4ae-4a5c-4587-a9e8-d381c5b76c63] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-52rrh" [49c6b5f0-286f-4bff-b719-d73a4ea4aaf3] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-hnvm4" [d01265be-d3ee-47dc-9d72-fd68a6a6eacd] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kindnet-ms872" [9dffcd3e-2cc0-414f-a465-fe37b80ad4bc] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-apiserver-multinode-788600" [5ade8d95-5387-4444-95af-604116cf695e] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-controller-manager-multinode-788600" [b7d7893e-bd95-4f96-879f-a8378040fc03] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-bkkql" [eccd7725-151c-4770-b99c-cb308b31389c] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-kc8c4" [340b4c9b-449f-4208-846e-dec867826bf7] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-proxy-sjsfc" [f06aadb7-e646-4105-af2f-0acc4a8ad174] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "kube-scheduler-multinode-788600" [55bd2888-a3b6-498a-9352-8b15bcc5e545] Running
	I0428 18:31:28.512215    5100 system_pods.go:89] "storage-provisioner" [04bc447a-c711-4c23-ad4b-db5fd32b28d2] Running
	I0428 18:31:28.512215    5100 system_pods.go:126] duration metric: took 194.5554ms to wait for k8s-apps to be running ...
	I0428 18:31:28.512215    5100 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 18:31:28.523596    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:31:28.548090    5100 system_svc.go:56] duration metric: took 35.8758ms WaitForService to wait for kubelet
	I0428 18:31:28.548090    5100 kubeadm.go:576] duration metric: took 11.8200968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
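[annotation] The kubelet check above is a single `sudo systemctl is-active --quiet service kubelet` over SSH (35.9 ms); with --quiet, is-active communicates purely through its exit code. Run locally, the equivalent probe looks like this sketch (sudo and the SSH transport are dropped):

    // kubelet_active_sketch.go: systemctl is-active --quiet exits 0 when the
    // unit is active and non-zero otherwise; no output is printed with --quiet.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is running")
    }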
	I0428 18:31:28.548090    5100 node_conditions.go:102] verifying NodePressure condition ...
	I0428 18:31:28.702139    5100 request.go:629] Waited for 153.8724ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:28.702342    5100 round_trippers.go:463] GET https://172.27.239.170:8443/api/v1/nodes
	I0428 18:31:28.702342    5100 round_trippers.go:469] Request Headers:
	I0428 18:31:28.702342    5100 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0428 18:31:28.702342    5100 round_trippers.go:473]     Accept: application/json, */*
	I0428 18:31:28.707188    5100 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 18:31:28.707350    5100 round_trippers.go:577] Response Headers:
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Date: Mon, 29 Apr 2024 01:31:28 GMT
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Audit-Id: acdc7926-627b-4787-8c23-2d4f5214c459
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Cache-Control: no-cache, private
	I0428 18:31:28.707350    5100 round_trippers.go:580]     Content-Type: application/json
	I0428 18:31:28.707350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ec634b0a-a4b3-4c2a-b4b9-edeb8336b697
	I0428 18:31:28.707350    5100 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 80dedd95-d062-4374-8eca-37a57949c81f
	I0428 18:31:28.707958    5100 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"multinode-788600","uid":"898c5667-1d80-4308-8237-76fdc5797c91","resourceVersion":"1817","creationTimestamp":"2024-04-29T01:08:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-788600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5aea53309587d5dad960702a78dfdd5fb48b1328","minikube.k8s.io/name":"multinode-788600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_28T18_08_50_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15503 chars]
	I0428 18:31:28.709032    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709032    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709032    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709119    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709119    5100 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 18:31:28.709119    5100 node_conditions.go:123] node cpu capacity is 2
	I0428 18:31:28.709119    5100 node_conditions.go:105] duration metric: took 161.0283ms to run NodePressure ...
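[annotation] The NodePressure pass lists all nodes and reads two capacity figures per node — here, three nodes each reporting 17734596Ki of ephemeral storage and 2 CPUs. A minimal sketch of pulling those quantities from a NodeList (kubeconfig path again a placeholder):

    // nodecap_sketch.go: read per-node cpu and ephemeral-storage capacity, as
    // the NodePressure check in the log does.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
        }
    }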
	I0428 18:31:28.709119    5100 start.go:240] waiting for startup goroutines ...
	I0428 18:31:28.709180    5100 start.go:245] waiting for cluster config update ...
	I0428 18:31:28.709206    5100 start.go:254] writing updated cluster config ...
	I0428 18:31:28.713635    5100 out.go:177] 
	I0428 18:31:28.728535    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:31:28.729592    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:31:28.736674    5100 out.go:177] * Starting "multinode-788600-m02" worker node in "multinode-788600" cluster
	I0428 18:31:28.739063    5100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 18:31:28.739063    5100 cache.go:56] Caching tarball of preloaded images
	I0428 18:31:28.739414    5100 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0428 18:31:28.739414    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 18:31:28.739414    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:31:28.741647    5100 start.go:360] acquireMachinesLock for multinode-788600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 18:31:28.742058    5100 start.go:364] duration metric: took 410.2µs to acquireMachinesLock for "multinode-788600-m02"
	I0428 18:31:28.742202    5100 start.go:96] Skipping create...Using existing machine configuration
	I0428 18:31:28.742240    5100 fix.go:54] fixHost starting: m02
	I0428 18:31:28.742706    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:30.731719    5100 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:31:30.731719    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:30.731719    5100 fix.go:112] recreateIfNeeded on multinode-788600-m02: state=Stopped err=<nil>
	W0428 18:31:30.731719    5100 fix.go:138] unexpected machine state, will restart: <nil>
	I0428 18:31:30.737932    5100 out.go:177] * Restarting existing hyperv VM for "multinode-788600-m02" ...
	I0428 18:31:30.740224    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-788600-m02
	I0428 18:31:33.744619    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:33.744865    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:33.744865    5100 main.go:141] libmachine: Waiting for host to start...
	I0428 18:31:33.744865    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:35.872684    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:38.345518    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:38.345783    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:39.349110    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:41.478789    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:41.478985    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:41.478985    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:43.966341    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:43.967262    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:44.974390    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:47.102289    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:47.102289    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:47.102510    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:49.538127    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:49.538127    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:50.538957    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:52.650250    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:31:55.084780    5100 main.go:141] libmachine: [stdout =====>] : 
	I0428 18:31:55.084780    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:56.086813    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:31:58.209363    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:31:58.210203    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:31:58.210203    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:00.710459    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:00.710539    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:00.713463    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:02.772748    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:02.772748    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:02.773382    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:05.249675    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:05.249675    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:05.250138    5100 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-788600\config.json ...
	I0428 18:32:05.252945    5100 machine.go:94] provisionDockerMachine start ...
	I0428 18:32:05.253070    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:07.311282    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:07.311648    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:07.311648    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:09.851540    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:09.851968    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:09.857517    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:09.858234    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:09.858234    5100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0428 18:32:09.987588    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0428 18:32:09.987588    5100 buildroot.go:166] provisioning hostname "multinode-788600-m02"
	I0428 18:32:09.987674    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:12.009811    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:12.009993    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:12.010120    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:14.460526    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:14.460526    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:14.466292    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:14.466996    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:14.466996    5100 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-788600-m02 && echo "multinode-788600-m02" | sudo tee /etc/hostname
	I0428 18:32:14.614945    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-788600-m02
	
	I0428 18:32:14.614945    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:16.646763    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:16.647833    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:16.647952    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:19.130150    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:19.130150    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:19.135386    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:19.135386    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:19.135912    5100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-788600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-788600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-788600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 18:32:19.269802    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 18:32:19.269875    5100 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0428 18:32:19.269931    5100 buildroot.go:174] setting up certificates
	I0428 18:32:19.269976    5100 provision.go:84] configureAuth start
	I0428 18:32:19.269976    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:21.299985    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:21.299985    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:21.300532    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:23.785896    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:23.785896    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:23.786564    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:25.835274    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:25.835274    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:25.835486    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:28.326513    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:28.327140    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:28.327140    5100 provision.go:143] copyHostCerts
	I0428 18:32:28.327140    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0428 18:32:28.327140    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0428 18:32:28.327140    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0428 18:32:28.328102    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0428 18:32:28.329575    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0428 18:32:28.330124    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0428 18:32:28.330215    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0428 18:32:28.330287    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0428 18:32:28.331583    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0428 18:32:28.331858    5100 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0428 18:32:28.331858    5100 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0428 18:32:28.332639    5100 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
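[annotation] copyHostCerts above follows a found/remove/copy sequence for ca.pem (1082 bytes), cert.pem (1123 bytes) and key.pem (1679 bytes), so a stale destination never survives a refresh. The same pattern in miniature; the paths are placeholders:

    // copycert_sketch.go: remove-then-copy, mirroring exec_runner's
    // found / rm / cp sequence in the log.
    package main

    import (
        "fmt"
        "os"
    )

    func copyFresh(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil { // "found <dst>, removing ..."
                return err
            }
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        if err := copyFresh(`certs\ca.pem`, `ca.pem`); err != nil { // placeholder paths
            panic(err)
        }
        fmt.Println("copied ca.pem")
    }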
	I0428 18:32:28.333443    5100 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-788600-m02 san=[127.0.0.1 172.27.237.37 localhost minikube multinode-788600-m02]
	I0428 18:32:28.497786    5100 provision.go:177] copyRemoteCerts
	I0428 18:32:28.511364    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 18:32:28.511364    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:30.560256    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:30.560712    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:30.560991    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:33.031720    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:33.032061    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:33.032170    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:32:33.145316    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.633862s)
	I0428 18:32:33.145411    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0428 18:32:33.145872    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 18:32:33.198469    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0428 18:32:33.199250    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0428 18:32:33.249609    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0428 18:32:33.250115    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 18:32:33.312741    5100 provision.go:87] duration metric: took 14.0427318s to configureAuth
	I0428 18:32:33.312897    5100 buildroot.go:189] setting minikube options for container-runtime
	I0428 18:32:33.313841    5100 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:32:33.314007    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:35.314823    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:37.773454    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:37.773454    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:37.780545    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:37.780621    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:37.780621    5100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0428 18:32:37.911382    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0428 18:32:37.911479    5100 buildroot.go:70] root file system type: tmpfs
	I0428 18:32:37.911733    5100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0428 18:32:37.911733    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:40.022110    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:40.022110    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:40.022221    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:42.596109    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:42.596981    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:42.603492    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:42.603492    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:42.604065    5100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.239.170"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0428 18:32:42.759890    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.239.170
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0428 18:32:42.759890    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:44.747073    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:44.747511    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:44.747593    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:47.181908    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:47.181908    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:47.188297    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:47.188827    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:47.188827    5100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0428 18:32:49.529003    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0428 18:32:49.529584    5100 machine.go:97] duration metric: took 44.2765326s to provisionDockerMachine
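[annotation] The remote command at 18:32:47 is an idempotent update: diff the freshly rendered docker.service.new against the installed unit, and only on a difference move it into place, daemon-reload, enable, and restart. On this just-restarted VM the installed unit does not exist yet, so diff fails with "can't stat" and the replace branch runs, creating the multi-user.target symlink. The pattern, sketched locally (paths illustrative; requires root on a systemd host):

    // unitupdate_sketch.go: replace-and-restart only when the rendered unit
    // differs from the installed one, as the `diff ... || { ... }` command above does.
    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func main() {
        installed, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
        rendered, err := os.ReadFile("/lib/systemd/system/docker.service.new")
        if err != nil {
            panic(err)
        }
        if bytes.Equal(installed, rendered) {
            return // unit unchanged; leave the running service alone
        }
        if err := os.Rename("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }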
	I0428 18:32:49.529584    5100 start.go:293] postStartSetup for "multinode-788600-m02" (driver="hyperv")
	I0428 18:32:49.529584    5100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 18:32:49.541764    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 18:32:49.541764    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:51.576610    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:54.060378    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:54.060378    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:54.060776    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:32:54.169892    5100 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.628053s)
	I0428 18:32:54.184389    5100 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 18:32:54.190850    5100 command_runner.go:130] > NAME=Buildroot
	I0428 18:32:54.190850    5100 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0428 18:32:54.190850    5100 command_runner.go:130] > ID=buildroot
	I0428 18:32:54.190850    5100 command_runner.go:130] > VERSION_ID=2023.02.9
	I0428 18:32:54.190850    5100 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0428 18:32:54.191950    5100 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 18:32:54.192074    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0428 18:32:54.192496    5100 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0428 18:32:54.193473    5100 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> 32282.pem in /etc/ssl/certs
	I0428 18:32:54.193473    5100 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem -> /etc/ssl/certs/32282.pem
	I0428 18:32:54.208684    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 18:32:54.228930    5100 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\32282.pem --> /etc/ssl/certs/32282.pem (1708 bytes)
	I0428 18:32:54.273925    5100 start.go:296] duration metric: took 4.744136s for postStartSetup
	I0428 18:32:54.274049    5100 fix.go:56] duration metric: took 1m25.5316046s for fixHost
	I0428 18:32:54.274160    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:32:56.306850    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:32:56.307120    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:56.307120    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:32:58.721421    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:32:58.721421    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:32:58.729781    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:32:58.729925    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:32:58.729925    5100 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 18:32:58.850694    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714354378.855254990
	
	I0428 18:32:58.850694    5100 fix.go:216] guest clock: 1714354378.855254990
	I0428 18:32:58.850694    5100 fix.go:229] Guest: 2024-04-28 18:32:58.85525499 -0700 PDT Remote: 2024-04-28 18:32:54.2740494 -0700 PDT m=+227.568030201 (delta=4.58120559s)
	I0428 18:32:58.850694    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:00.855861    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:00.855861    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:00.855943    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:03.353889    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:03.354496    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:03.359702    5100 main.go:141] libmachine: Using SSH client type: native
	I0428 18:33:03.360312    5100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf3a1c0] 0xf3cda0 <nil>  [] 0s} 172.27.237.37 22 <nil> <nil>}
	I0428 18:33:03.360312    5100 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714354378
	I0428 18:33:03.507702    5100 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 01:32:58 UTC 2024
	
	I0428 18:33:03.507776    5100 fix.go:236] clock set: Mon Apr 29 01:32:58 UTC 2024
	 (err=<nil>)
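Annotation: the clock-fix sequence above reads the guest clock with date +%s.%N, diffs it against the host wall clock (delta=4.58s here), and rewrites the guest clock from the host's epoch seconds via date -s. A minimal sketch of the same check in shell, assuming an SSH alias "node" and a placeholder drift threshold:

	# read guest epoch, compare to local epoch, reset guest if drift is large
	guest=$(ssh node 'date +%s')
	host=$(date +%s)
	drift=$(( guest - host ))
	if [ "${drift#-}" -gt 2 ]; then    # 2s is a placeholder threshold
	  ssh node "sudo date -s @${host}"
	fi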
	I0428 18:33:03.507822    5100 start.go:83] releasing machines lock for "multinode-788600-m02", held for 1m34.7655374s
	I0428 18:33:03.508023    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:05.461328    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:07.913230    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:07.913475    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:07.916681    5100 out.go:177] * Found network options:
	I0428 18:33:07.927793    5100 out.go:177]   - NO_PROXY=172.27.239.170
	W0428 18:33:07.930394    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 18:33:07.933609    5100 out.go:177]   - NO_PROXY=172.27.239.170
	W0428 18:33:07.935889    5100 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 18:33:07.937225    5100 proxy.go:119] fail to check proxy env: Error ip not in block
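Annotation: the repeated "fail to check proxy env: Error ip not in block" warnings appear to mean that this node's IP (172.27.237.37) is not covered by the NO_PROXY value, which here is a single host IP (172.27.239.170) rather than a CIDR spanning both nodes. A hedged workaround sketch, with a hypothetical subnet standing in for the real Hyper-V switch range:

	# hypothetical: exclude the whole internal subnet from proxying
	export NO_PROXY=172.27.239.170,172.27.224.0/20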
	I0428 18:33:07.940076    5100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 18:33:07.940160    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:07.950375    5100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0428 18:33:07.950375    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:10.019451    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:10.050724    5100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:33:10.051108    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:10.051210    5100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:33:12.565621    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:12.565621    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:12.566812    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:33:12.598545    5100 main.go:141] libmachine: [stdout =====>] : 172.27.237.37
	
	I0428 18:33:12.598640    5100 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:33:12.598771    5100 sshutil.go:53] new ssh client: &{IP:172.27.237.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:33:12.664665    5100 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0428 18:33:12.665276    5100 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7148894s)
	W0428 18:33:12.665374    5100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 18:33:12.679974    5100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 18:33:12.789857    5100 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0428 18:33:12.790010    5100 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0428 18:33:12.790010    5100 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8497669s)
	I0428 18:33:12.790010    5100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
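Annotation: the find/mv invocation above is how conflicting bridge/podman CNI configs get disabled: matching files in /etc/cni/net.d are renamed with a .mk_disabled suffix so the runtime no longer loads them (here /etc/cni/net.d/87-podman-bridge.conflist). To undo that by hand on the guest (a sketch, not a minikube command):

	# restore any CNI configs that were renamed aside
	for f in /etc/cni/net.d/*.mk_disabled; do
	  [ -e "$f" ] || continue
	  sudo mv "$f" "${f%.mk_disabled}"
	done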
	I0428 18:33:12.790010    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:33:12.790288    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:33:12.826620    5100 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
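Annotation: the tee above leaves /etc/crictl.yaml pointing CRI tooling at containerd's socket; as the echoed output shows, the whole file is one line:

	runtime-endpoint: unix:///run/containerd/containerd.sock

With that in place, sudo crictl ps on the guest would address containerd. Note it is rewritten to the cri-dockerd socket further down, once the run settles on Docker as the runtime.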
	I0428 18:33:12.841093    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0428 18:33:12.871023    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0428 18:33:12.892178    5100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0428 18:33:12.905247    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0428 18:33:12.938633    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:33:12.970304    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0428 18:33:13.001024    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0428 18:33:13.032485    5100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 18:33:13.065419    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0428 18:33:13.096245    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0428 18:33:13.128214    5100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0428 18:33:13.166014    5100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 18:33:13.183104    5100 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0428 18:33:13.193636    5100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 18:33:13.223445    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:33:13.433968    5100 ssh_runner.go:195] Run: sudo systemctl restart containerd
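Annotation: taken together, the sed edits above rewrite /etc/containerd/config.toml to match the "cgroupfs" driver: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, the sandbox image is pinned to registry.k8s.io/pause:3.9, and conf_dir points at /etc/cni/net.d. The key resulting TOML fragment would look roughly like this (a sketch; the full file has many more sections):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"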
	I0428 18:33:13.467059    5100 start.go:494] detecting cgroup driver to use...
	I0428 18:33:13.481994    5100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0428 18:33:13.506238    5100 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0428 18:33:13.506238    5100 command_runner.go:130] > [Unit]
	I0428 18:33:13.506238    5100 command_runner.go:130] > Description=Docker Application Container Engine
	I0428 18:33:13.506238    5100 command_runner.go:130] > Documentation=https://docs.docker.com
	I0428 18:33:13.506238    5100 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0428 18:33:13.506238    5100 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0428 18:33:13.506238    5100 command_runner.go:130] > StartLimitBurst=3
	I0428 18:33:13.506238    5100 command_runner.go:130] > StartLimitIntervalSec=60
	I0428 18:33:13.506238    5100 command_runner.go:130] > [Service]
	I0428 18:33:13.506238    5100 command_runner.go:130] > Type=notify
	I0428 18:33:13.506238    5100 command_runner.go:130] > Restart=on-failure
	I0428 18:33:13.506238    5100 command_runner.go:130] > Environment=NO_PROXY=172.27.239.170
	I0428 18:33:13.506238    5100 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0428 18:33:13.506238    5100 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0428 18:33:13.506238    5100 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0428 18:33:13.506238    5100 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0428 18:33:13.506238    5100 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0428 18:33:13.506238    5100 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0428 18:33:13.506238    5100 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0428 18:33:13.506238    5100 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0428 18:33:13.506238    5100 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecStart=
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0428 18:33:13.506238    5100 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0428 18:33:13.506238    5100 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0428 18:33:13.506238    5100 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0428 18:33:13.506238    5100 command_runner.go:130] > LimitNOFILE=infinity
	I0428 18:33:13.506238    5100 command_runner.go:130] > LimitNPROC=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > LimitCORE=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0428 18:33:13.506781    5100 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0428 18:33:13.506781    5100 command_runner.go:130] > TasksMax=infinity
	I0428 18:33:13.506781    5100 command_runner.go:130] > TimeoutStartSec=0
	I0428 18:33:13.506781    5100 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0428 18:33:13.506781    5100 command_runner.go:130] > Delegate=yes
	I0428 18:33:13.506781    5100 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0428 18:33:13.506781    5100 command_runner.go:130] > KillMode=process
	I0428 18:33:13.506781    5100 command_runner.go:130] > [Install]
	I0428 18:33:13.506781    5100 command_runner.go:130] > WantedBy=multi-user.target
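Annotation: the unit dump above shows the standard systemd drop-in pattern its own comments describe: the bare ExecStart= first clears the command inherited from the base dockerd unit, and the second ExecStart= supplies the full replacement; without the clearing line, systemd rejects the unit ("Service has more than one ExecStart= setting"). To inspect the merged result on the guest:

	# show the unit together with all drop-ins, then the effective command
	systemctl cat docker.service
	systemctl show -p ExecStart docker.service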
	I0428 18:33:13.520708    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:33:13.558375    5100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 18:33:13.617753    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 18:33:13.659116    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:33:13.695731    5100 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0428 18:33:13.761229    5100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0428 18:33:13.785450    5100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 18:33:13.821474    5100 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0428 18:33:13.835113    5100 ssh_runner.go:195] Run: which cri-dockerd
	I0428 18:33:13.845616    5100 command_runner.go:130] > /usr/bin/cri-dockerd
	I0428 18:33:13.860160    5100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0428 18:33:13.876613    5100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
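Annotation: the 189-byte drop-in written to /etc/systemd/system/cri-docker.service.d/10-cni.conf is scp'd from memory, so its payload never appears in this log and stays elided here. Drop-ins at this path typically override cri-dockerd's ExecStart to pass CNI flags; a purely hypothetical sketch of the shape, not the real contents:

	# hypothetical 10-cni.conf; actual bytes are not printed in this log
	[Service]
	ExecStart=
	ExecStart=/usr/bin/cri-dockerd --network-plugin=cni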
	I0428 18:33:13.922608    5100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0428 18:33:14.133089    5100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0428 18:33:14.319723    5100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0428 18:33:14.319858    5100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
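Annotation: the 130-byte /etc/docker/daemon.json written here is what switches dockerd itself to the cgroupfs driver named at docker.go:574. The payload is not echoed in the log; the conventional shape of such a file (a sketch, assuming minikube's usual layout) is:

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": {"max-size": "100m"},
	  "storage-driver": "overlay2"
	}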
	I0428 18:33:14.365706    5100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 18:33:14.564799    5100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0428 18:34:15.692524    5100 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0428 18:34:15.692592    5100 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0428 18:34:15.692592    5100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1276455s)
	I0428 18:34:15.705979    5100 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0428 18:34:15.728446    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0428 18:34:15.728577    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.809969396Z" level=info msg="Starting up"
	I0428 18:34:15.728577    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.810971814Z" level=info msg="containerd not running, starting managed containerd"
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.812287837Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.847769870Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0428 18:34:15.728675    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.874938755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0428 18:34:15.728747    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875097458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0428 18:34:15.728747    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875160459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0428 18:34:15.728818    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875177259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728840    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875749069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.728840    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875908772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876188877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876290779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.728929    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876312679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0428 18:34:15.729020    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876324280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877036692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877872507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881632774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881737076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881892779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881991681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883069900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883201902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883221703Z" level=info msg="metadata content store policy set" policy=shared
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900315007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900509811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900578112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900636113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900666214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900753815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901202723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901383226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901578330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901609830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901628931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729049    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901645731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901661531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901678632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901695332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901717232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729588    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901736033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901751733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901782434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901801134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901817034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729827    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901832734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901848035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901869935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729938    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.729987    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901902536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901919336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901939336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901954637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901970337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730024    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901985537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902004338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902045138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902061339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902075139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0428 18:34:15.730108    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902212941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902320843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902354644Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902423045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0428 18:34:15.730212    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902464146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0428 18:34:15.730378    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902479446Z" level=info msg="NRI interface is disabled by configuration."
	I0428 18:34:15.730378    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903415363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903706068Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903861271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.904299478Z" level=info msg="containerd successfully booted in 0.059611s"
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.876990250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.969290393Z" level=info msg="Loading containers: start."
	I0428 18:34:15.730454    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.292494295Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.376103508Z" level=info msg="Loading containers: done."
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.420350009Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0428 18:34:15.730591    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.421214025Z" level=info msg="Daemon has completed initialization"
	I0428 18:34:15.730646    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531900928Z" level=info msg="API listen on /var/run/docker.sock"
	I0428 18:34:15.730646    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531988129Z" level=info msg="API listen on [::]:2376"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:32:49 multinode-788600-m02 systemd[1]: Started Docker Application Container Engine.
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.594905647Z" level=info msg="Processing signal 'terminated'"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597013752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597659553Z" level=info msg="Daemon shutdown complete"
	I0428 18:34:15.730687    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598156755Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598169255Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: docker.service: Deactivated successfully.
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0428 18:34:15.730786    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:33:15 multinode-788600-m02 dockerd[1045]: time="2024-04-29T01:33:15.672598455Z" level=info msg="Starting up"
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 dockerd[1045]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0428 18:34:15.730910    5100 command_runner.go:130] > Apr 29 01:34:15 multinode-788600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0428 18:34:15.739602    5100 out.go:177] 
	W0428 18:34:15.742382    5100 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 01:32:47 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.809969396Z" level=info msg="Starting up"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.810971814Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:47.812287837Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.847769870Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.874938755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875097458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875160459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875177259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875749069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.875908772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876188877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876290779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876312679Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.876324280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877036692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.877872507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881632774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881737076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881892779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.881991681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883069900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883201902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.883221703Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900315007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900509811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900578112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900636113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900666214Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.900753815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901202723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901383226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901578330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901609830Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901628931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901645731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901661531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901678632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901695332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901717232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901736033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901751733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901782434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901801134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901817034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901832734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901848035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901869935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901902536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901919336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901939336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901954637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901970337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.901985537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902004338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902045138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902061339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902075139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902212941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902320843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902354644Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902423045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902464146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.902479446Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903415363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903706068Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.903861271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 01:32:47 multinode-788600-m02 dockerd[668]: time="2024-04-29T01:32:47.904299478Z" level=info msg="containerd successfully booted in 0.059611s"
	Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.876990250Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 01:32:48 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:48.969290393Z" level=info msg="Loading containers: start."
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.292494295Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.376103508Z" level=info msg="Loading containers: done."
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.420350009Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.421214025Z" level=info msg="Daemon has completed initialization"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531900928Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 01:32:49 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:32:49.531988129Z" level=info msg="API listen on [::]:2376"
	Apr 29 01:32:49 multinode-788600-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 01:33:14 multinode-788600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.594905647Z" level=info msg="Processing signal 'terminated'"
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597013752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.597659553Z" level=info msg="Daemon shutdown complete"
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598156755Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 01:33:14 multinode-788600-m02 dockerd[662]: time="2024-04-29T01:33:14.598169255Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 01:33:15 multinode-788600-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 01:33:15 multinode-788600-m02 dockerd[1045]: time="2024-04-29T01:33:15.672598455Z" level=info msg="Starting up"
	Apr 29 01:34:15 multinode-788600-m02 dockerd[1045]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 01:34:15 multinode-788600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
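Annotation: the failure signature above is narrow: dockerd (PID 662) starts cleanly once, with its managed containerd listening at 01:32:49, but the second start (PID 1045) spends the full sixty seconds trying to dial /run/containerd/containerd.sock and exits when the deadline lapses. Given that this run had just stopped the standalone containerd service, a plausible first diagnostic pass on the guest would be (a sketch, not output from this run):

	# is anything serving the socket dockerd is dialing?
	sudo systemctl status containerd --no-pager
	ls -l /run/containerd/containerd.sock
	# containerd's own logs around the failure window
	sudo journalctl -u containerd --no-pager --since 01:32 --until 01:35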
	W0428 18:34:15.742938    5100 out.go:239] * 
	W0428 18:34:15.744099    5100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0428 18:34:15.746768    5100 out.go:177] 
	
	
	==> Docker <==
	Apr 29 01:31:20 multinode-788600 cri-dockerd[1283]: time="2024-04-29T01:31:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/faa3aa22a49af636bcdb5899779442ac222d821a7fa50dd30cd32fa6402bf907/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048222904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048349906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048369306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:21 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:21.048583609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:44 multinode-788600 dockerd[1056]: time="2024-04-29T01:31:44.200185664Z" level=info msg="ignoring event" container=095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 01:31:44 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:44.201210477Z" level=info msg="shim disconnected" id=095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189 namespace=moby
	Apr 29 01:31:44 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:44.202700296Z" level=warning msg="cleaning up after shim disconnected" id=095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189 namespace=moby
	Apr 29 01:31:44 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:44.202976799Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.620883378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.621051281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.621073181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:31:57 multinode-788600 dockerd[1062]: time="2024-04-29T01:31:57.621271283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 01:34:36 multinode-788600 dockerd[1056]: 2024/04/29 01:34:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:36 multinode-788600 dockerd[1056]: 2024/04/29 01:34:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:36 multinode-788600 dockerd[1056]: 2024/04/29 01:34:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:36 multinode-788600 dockerd[1056]: 2024/04/29 01:34:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 01:34:37 multinode-788600 dockerd[1056]: 2024/04/29 01:34:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a287b9d74963       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   c9c6fe831ace4       storage-provisioner
	aac9ab11d8404       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   faa3aa22a49af       busybox-fc5497c4f-4qvlm
	871f1babd92ce       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   dea947e4b267e       coredns-7db6d8ff4d-rp2lx
	a9806e7345fc9       4950bb10b3f87                                                                                         4 minutes ago       Running             kindnet-cni               1                   a2f37ed6a52fb       kindnet-52rrh
	b16bbceb6bdee       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                1                   330975770c2cb       kube-proxy-bkkql
	095a245b1d2bf       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   c9c6fe831ace4       storage-provisioner
	ace8dc8c78d56       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   79616a5b9f290       kube-apiserver-multinode-788600
	22857de4092ae       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   1                   b9e44b89472c5       kube-controller-manager-multinode-788600
	64707d485e51b       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   dfe4b0f43edfa       etcd-multinode-788600
	705d4c5c927e7       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            1                   a1f5f4944d7ec       kube-scheduler-multinode-788600
	d0d5fbf9b871e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   fcbd24a1db2d8       busybox-fc5497c4f-4qvlm
	64e6fcf4a3f2f       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   70af634f6134d       coredns-7db6d8ff4d-rp2lx
	33e59494d8be9       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Exited              kindnet-cni               0                   d1342e9d71114       kindnet-52rrh
	8542b2c39cf5b       a0bf559e280cf                                                                                         26 minutes ago      Exited              kube-proxy                0                   776d075f3716e       kube-proxy-bkkql
	d55fefd692cfc       259c8277fcbbc                                                                                         27 minutes ago      Exited              kube-scheduler            0                   26381d4606b51       kube-scheduler-multinode-788600
	edb2c636ad5d7       c7aad43836fa5                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   9ffe1b8b41e4c       kube-controller-manager-multinode-788600
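	
	Reading this table: kube-apiserver and etcd show ATTEMPT 0 but are only 4 minutes old, while the Exited copies of coredns, kindnet, kube-proxy and the schedulers are 26-27 minutes old, consistent with the minikube restart earlier in this log rather than with individual crashes. To see why the first storage-provisioner container (095a245b1d2bf) exited, one could pull its logs on the node; a sketch assuming the Docker runtime shown above (illustrative, not from the captured run):
	
	    docker ps -a --filter name=storage-provisioner
	    docker logs 095a245b1d2bf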
	
	
	==> coredns [64e6fcf4a3f2] <==
	[INFO] 10.244.0.3:53871 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001397s
	[INFO] 10.244.0.3:34178 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001399s
	[INFO] 10.244.0.3:59684 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001391s
	[INFO] 10.244.0.3:35758 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0003144s
	[INFO] 10.244.0.3:54201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000513s
	[INFO] 10.244.0.3:57683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000876s
	[INFO] 10.244.0.3:49694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001237s
	[INFO] 10.244.1.2:48711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229s
	[INFO] 10.244.1.2:37460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001261s
	[INFO] 10.244.1.2:32950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001014s
	[INFO] 10.244.1.2:49157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000511s
	[INFO] 10.244.0.3:49454 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003908s
	[INFO] 10.244.0.3:56632 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000654s
	[INFO] 10.244.0.3:51203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000936s
	[INFO] 10.244.0.3:53433 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001697s
	[INFO] 10.244.1.2:54748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001237s
	[INFO] 10.244.1.2:55201 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002599s
	[INFO] 10.244.1.2:45426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000815s
	[INFO] 10.244.1.2:49822 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001063s
	[INFO] 10.244.0.3:38954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118s
	[INFO] 10.244.0.3:58102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002236s
	[INFO] 10.244.0.3:48832 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001238s
	[INFO] 10.244.0.3:49749 - 5 "PTR IN 1.224.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001072s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [871f1babd92c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fb4ceeca2eb53dd6d0d82a0cb1df02d14ea612846284d5a80845f7b5ec07f1ef2c951631ec8df748f439117d36661751b43c1cc4b2fa4270be8574cc8fc671e5
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35167 - 57024 "HINFO IN 6138708222212467430.87596895660326264. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027490357s
	
	
	==> describe nodes <==
	Name:               multinode-788600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T18_08_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:08:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 01:31:16 +0000   Mon, 29 Apr 2024 01:31:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.239.170
	  Hostname:    multinode-788600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad4b84e5f48240b1ba6c29345f8a41f7
	  System UUID:                6f78c2a9-1744-3642-a944-13bbeb7f5c76
	  Boot ID:                    5454e797-3a96-4b7c-aeb3-6a513f59521a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4qvlm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-rp2lx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-788600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-52rrh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-788600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-multinode-788600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-bkkql                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-788600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x6 over 27m)      kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x6 over 27m)      kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x6 over 27m)      kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-788600 event: Registered Node multinode-788600 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-788600 status is now: NodeReady
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node multinode-788600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node multinode-788600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node multinode-788600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m40s                  node-controller  Node multinode-788600 event: Registered Node multinode-788600 in Controller
	
	
	Name:               multinode-788600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-788600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-788600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T18_11_53_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 01:11:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-788600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 01:28:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 01:23:05 +0000   Mon, 29 Apr 2024 01:32:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.230.221
	  Hostname:    multinode-788600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f3f256c1ef74f1aabdee6846e11e827
	  System UUID:                ea348b67-6b29-8b46-84e3-ebf01858b203
	  Boot ID:                    23d1db59-b5c6-484d-aa22-1e61e2ff3b17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4fdn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-hnvm4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-kc8c4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-788600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-788600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-788600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-788600-m02 event: Registered Node multinode-788600-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-788600-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m41s              node-controller  Node multinode-788600-m02 event: Registered Node multinode-788600-m02 in Controller
	  Normal  NodeNotReady             4m1s               node-controller  Node multinode-788600-m02 status is now: NodeNotReady
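	
	The Unknown conditions and unreachable taints on multinode-788600-m02 line up with its kubelet going silent at 01:28 and the docker/containerd failure captured at the top of this log. The same state can be read back through kubectl; a sketch for illustration (not part of the captured run):
	
	    kubectl get nodes -o wide
	    kubectl get node multinode-788600-m02 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	    kubectl describe node multinode-788600-m02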
	
	
	==> dmesg <==
	[  +1.336669] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.221212] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.025561] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 01:30] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.106257] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.071543] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +25.571821] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.114126] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.563040] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[  +0.196534] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	[  +0.227945] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[Apr29 01:31] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +0.197785] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.196086] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.277819] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.898631] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.107432] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.348423] systemd-fstab-generator[1529]: Ignoring "noauto" option for root device
	[  +2.115772] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.067702] kauditd_printk_skb: 10 callbacks suppressed
	[  +3.690885] systemd-fstab-generator[2336]: Ignoring "noauto" option for root device
	[  +3.420808] kauditd_printk_skb: 70 callbacks suppressed
	[ +13.045790] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [64707d485e51] <==
	{"level":"info","ts":"2024-04-29T01:31:08.590486Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T01:31:08.590514Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T01:31:08.591082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 switched to configuration voters=(10532433051239484145)"}
	{"level":"info","ts":"2024-04-29T01:31:08.594273Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T01:31:08.594551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"557f475d665ae496","local-member-id":"922ab80e8fb68af1","added-peer-id":"922ab80e8fb68af1","added-peer-peer-urls":["https://172.27.231.169:2380"]}
	{"level":"info","ts":"2024-04-29T01:31:08.594873Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"557f475d665ae496","local-member-id":"922ab80e8fb68af1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T01:31:08.594932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T01:31:08.595404Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.239.170:2380"}
	{"level":"info","ts":"2024-04-29T01:31:08.603393Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"922ab80e8fb68af1","initial-advertise-peer-urls":["https://172.27.239.170:2380"],"listen-peer-urls":["https://172.27.239.170:2380"],"advertise-client-urls":["https://172.27.239.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.239.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T01:31:08.603549Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T01:31:08.60385Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.239.170:2380"}
	{"level":"info","ts":"2024-04-29T01:31:09.939198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T01:31:09.939277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T01:31:09.939326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 received MsgPreVoteResp from 922ab80e8fb68af1 at term 2"}
	{"level":"info","ts":"2024-04-29T01:31:09.939343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.93935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 received MsgVoteResp from 922ab80e8fb68af1 at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"922ab80e8fb68af1 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.939374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 922ab80e8fb68af1 elected leader 922ab80e8fb68af1 at term 3"}
	{"level":"info","ts":"2024-04-29T01:31:09.947217Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"922ab80e8fb68af1","local-member-attributes":"{Name:multinode-788600 ClientURLs:[https://172.27.239.170:2379]}","request-path":"/0/members/922ab80e8fb68af1/attributes","cluster-id":"557f475d665ae496","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T01:31:09.94725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T01:31:09.947267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T01:31:09.948622Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T01:31:09.948642Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T01:31:09.95067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T01:31:09.95067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.239.170:2379"}
	
	
	==> kernel <==
	 01:36:05 up 6 min,  0 users,  load average: 0.76, 0.41, 0.18
	Linux multinode-788600 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33e59494d8be] <==
	I0429 01:28:03.302125       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:13.311628       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:13.311750       1 main.go:227] handling current node
	I0429 01:28:13.311809       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:13.311821       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:13.312461       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:13.312599       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:23.327565       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:23.327670       1 main.go:227] handling current node
	I0429 01:28:23.327685       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:23.327693       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:23.328051       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:23.328081       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:33.338514       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:33.338596       1 main.go:227] handling current node
	I0429 01:28:33.338609       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:33.338616       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:33.339035       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:33.339064       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:28:43.358460       1 main.go:223] Handling node with IPs: map[172.27.231.169:{}]
	I0429 01:28:43.358485       1 main.go:227] handling current node
	I0429 01:28:43.358495       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:28:43.358501       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:28:43.358607       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:28:43.358615       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a9806e7345fc] <==
	I0429 01:35:05.474463       1 main.go:227] handling current node
	I0429 01:35:05.474495       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:35:05.474504       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:35:05.475185       1 main.go:223] Handling node with IPs: map[172.27.237.64:{}]
	I0429 01:35:05.475283       1 main.go:250] Node multinode-788600-m03 has CIDR [10.244.3.0/24] 
	I0429 01:35:15.490247       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:35:15.490330       1 main.go:227] handling current node
	I0429 01:35:15.490344       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:35:15.490351       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:35:25.499671       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:35:25.499816       1 main.go:227] handling current node
	I0429 01:35:25.499830       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:35:25.499838       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:35:35.510353       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:35:35.510441       1 main.go:227] handling current node
	I0429 01:35:35.510457       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:35:35.510465       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:35:45.521971       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:35:45.522104       1 main.go:227] handling current node
	I0429 01:35:45.522119       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:35:45.522128       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	I0429 01:35:55.530582       1 main.go:223] Handling node with IPs: map[172.27.239.170:{}]
	I0429 01:35:55.530629       1 main.go:227] handling current node
	I0429 01:35:55.530643       1 main.go:223] Handling node with IPs: map[172.27.230.221:{}]
	I0429 01:35:55.530651       1 main.go:250] Node multinode-788600-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ace8dc8c78d5] <==
	I0429 01:31:11.670353       1 aggregator.go:165] initial CRD sync complete...
	I0429 01:31:11.670367       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 01:31:11.670374       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 01:31:11.670381       1 cache.go:39] Caches are synced for autoregister controller
	I0429 01:31:11.713290       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 01:31:11.719044       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 01:31:11.719588       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 01:31:11.719870       1 policy_source.go:224] refreshing policies
	I0429 01:31:11.721136       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 01:31:11.721383       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 01:31:11.721444       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 01:31:11.726066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 01:31:11.732819       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 01:31:11.736360       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 01:31:11.754183       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 01:31:12.531066       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 01:31:13.251224       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.231.169 172.27.239.170]
	I0429 01:31:13.254587       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 01:31:13.287928       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 01:31:14.626912       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 01:31:14.850074       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 01:31:14.883026       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 01:31:15.050651       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 01:31:15.073275       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0429 01:31:33.250172       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.239.170]
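	
	The two "Resetting endpoints" lines are worth noting: the control-plane address moves from 172.27.231.169 to 172.27.239.170, i.e. Hyper-V handed the VM a new IP across the restart, matching the etcd advertise URLs earlier in this log. To confirm which address the cluster currently advertises, a small kubectl sketch (illustrative, not from the captured run):
	
	    kubectl get endpoints kubernetes -n default
	    kubectl cluster-info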
	
	
	==> kube-controller-manager [22857de4092a] <==
	I0429 01:31:24.420567       1 shared_informer.go:320] Caches are synced for TTL
	I0429 01:31:24.433566       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0429 01:31:24.454141       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 01:31:24.456934       1 shared_informer.go:320] Caches are synced for node
	I0429 01:31:24.457178       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0429 01:31:24.457311       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0429 01:31:24.457340       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0429 01:31:24.457349       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0429 01:31:24.480368       1 shared_informer.go:320] Caches are synced for taint
	I0429 01:31:24.481351       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 01:31:24.511463       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600"
	I0429 01:31:24.511797       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m02"
	I0429 01:31:24.517067       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m03"
	I0429 01:31:24.517148       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 01:31:24.522249       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 01:31:24.523170       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 01:31:24.951816       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 01:31:24.951855       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 01:31:24.960529       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 01:32:04.630813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.212701ms"
	I0429 01:32:04.632363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.4µs"
	I0429 01:36:04.389046       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ms872"
	I0429 01:36:04.453300       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ms872"
	I0429 01:36:04.453407       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sjsfc"
	I0429 01:36:04.525164       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sjsfc"
	
	
	==> kube-controller-manager [edb2c636ad5d] <==
	I0429 01:09:14.942008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.7µs"
	I0429 01:09:17.024665       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 01:11:53.161790       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m02\" does not exist"
	I0429 01:11:53.177770       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m02" podCIDRs=["10.244.1.0/24"]
	I0429 01:11:57.056826       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m02"
	I0429 01:12:12.447989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:12:38.086505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.050872ms"
	I0429 01:12:38.156586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.927316ms"
	I0429 01:12:38.156985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.8µs"
	I0429 01:12:40.843412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.957702ms"
	I0429 01:12:40.844132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.3µs"
	I0429 01:12:40.953439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.253802ms"
	I0429 01:12:40.953522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.8µs"
	I0429 01:16:25.628360       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:16:25.629372       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m03\" does not exist"
	I0429 01:16:25.644835       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m03" podCIDRs=["10.244.2.0/24"]
	I0429 01:16:27.127052       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-788600-m03"
	I0429 01:16:44.649366       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:24:07.261198       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:26:40.701566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:26:46.734897       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-788600-m03\" does not exist"
	I0429 01:26:46.736292       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:26:46.764001       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-788600-m03" podCIDRs=["10.244.3.0/24"]
	I0429 01:26:54.696904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	I0429 01:28:22.452429       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-788600-m02"
	
	
	==> kube-proxy [8542b2c39cf5] <==
	I0429 01:09:05.708863       1 server_linux.go:69] "Using iptables proxy"
	I0429 01:09:05.742050       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.231.169"]
	I0429 01:09:05.825870       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 01:09:05.825916       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 01:09:05.826023       1 server_linux.go:165] "Using iptables Proxier"
	I0429 01:09:05.838937       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 01:09:05.840502       1 server.go:872] "Version info" version="v1.30.0"
	I0429 01:09:05.840525       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:09:05.843961       1 config.go:192] "Starting service config controller"
	I0429 01:09:05.846365       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 01:09:05.846409       1 config.go:319] "Starting node config controller"
	I0429 01:09:05.846416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 01:09:05.849462       1 config.go:101] "Starting endpoint slice config controller"
	I0429 01:09:05.849804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 01:09:05.946586       1 shared_informer.go:320] Caches are synced for node config
	I0429 01:09:05.946631       1 shared_informer.go:320] Caches are synced for service config
	I0429 01:09:05.953363       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b16bbceb6bde] <==
	I0429 01:31:14.456633       1 server_linux.go:69] "Using iptables proxy"
	I0429 01:31:14.508160       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.239.170"]
	I0429 01:31:14.653659       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 01:31:14.653749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 01:31:14.653771       1 server_linux.go:165] "Using iptables Proxier"
	I0429 01:31:14.664302       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 01:31:14.666172       1 server.go:872] "Version info" version="v1.30.0"
	I0429 01:31:14.666194       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:31:14.669815       1 config.go:192] "Starting service config controller"
	I0429 01:31:14.671494       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 01:31:14.671761       1 config.go:101] "Starting endpoint slice config controller"
	I0429 01:31:14.672103       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 01:31:14.672303       1 config.go:319] "Starting node config controller"
	I0429 01:31:14.678976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 01:31:14.772647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 01:31:14.772720       1 shared_informer.go:320] Caches are synced for service config
	I0429 01:31:14.779371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [705d4c5c927e] <==
	I0429 01:31:09.468784       1 serving.go:380] Generated self-signed cert in-memory
	W0429 01:31:11.642384       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 01:31:11.642434       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 01:31:11.642447       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 01:31:11.642454       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 01:31:11.677884       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 01:31:11.677974       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 01:31:11.680797       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 01:31:11.680837       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 01:31:11.681224       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 01:31:11.684058       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 01:31:11.781602       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d55fefd692cf] <==
	E0429 01:08:46.888518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 01:08:47.003501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.003561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.057469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.059611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.081787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 01:08:47.082341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 01:08:47.119979       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 01:08:47.120206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 01:08:47.214340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 01:08:47.214395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 01:08:47.226615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 01:08:47.226976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 01:08:47.234210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 01:08:47.234301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 01:08:47.252946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 01:08:47.253198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 01:08:47.278229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 01:08:47.278421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 01:08:47.396441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 01:08:47.396483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 01:08:47.456293       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 01:08:47.456674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 01:08:49.334502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 01:28:45.556004       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 01:31:44 multinode-788600 kubelet[1536]: I0429 01:31:44.812614    1536 scope.go:117] "RemoveContainer" containerID="095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189"
	Apr 29 01:31:44 multinode-788600 kubelet[1536]: E0429 01:31:44.812917    1536 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(04bc447a-c711-4c23-ad4b-db5fd32b28d2)\"" pod="kube-system/storage-provisioner" podUID="04bc447a-c711-4c23-ad4b-db5fd32b28d2"
	Apr 29 01:31:57 multinode-788600 kubelet[1536]: I0429 01:31:57.426875    1536 scope.go:117] "RemoveContainer" containerID="095a245b1d2bf21636ffde23dd5c5870384c2efe0fde1ff21c738d02ecbad189"
	Apr 29 01:32:06 multinode-788600 kubelet[1536]: I0429 01:32:06.400415    1536 scope.go:117] "RemoveContainer" containerID="e148c0cdbae012e13553185eaf9647e7246c72513d9635d3374eb7ff14f06607"
	Apr 29 01:32:06 multinode-788600 kubelet[1536]: I0429 01:32:06.451784    1536 scope.go:117] "RemoveContainer" containerID="27388b03fb268ba63831b1854067c0397773cf8e5fd633f335a773b88f2779ee"
	Apr 29 01:32:06 multinode-788600 kubelet[1536]: E0429 01:32:06.453345    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:32:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:33:06 multinode-788600 kubelet[1536]: E0429 01:33:06.452130    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:33:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:34:06 multinode-788600 kubelet[1536]: E0429 01:34:06.449749    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:34:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 01:35:06 multinode-788600 kubelet[1536]: E0429 01:35:06.451251    1536 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 01:35:06 multinode-788600 kubelet[1536]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 01:35:06 multinode-788600 kubelet[1536]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 01:35:06 multinode-788600 kubelet[1536]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 01:35:06 multinode-788600 kubelet[1536]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0428 18:35:57.076868   13044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
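
The stderr warning above is cosmetic: the Docker CLI context named "default" has no metadata file on this host. Docker keys each context's metadata directory by the SHA-256 of the context name, which is why the path ends in 37a8eec1...; a standalone Go sketch (not minikube code) reproduces that directory name:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Docker's CLI derives each context's metadata directory from
		// the SHA-256 of the context name; hashing "default" reproduces
		// the 37a8eec1... directory seen in the warning above.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	}
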
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-788600 -n multinode-788600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-788600 -n multinode-788600: (11.9655931s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-788600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (88.65s)

TestRunningBinaryUpgrade (10800.452s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade


=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.904561798.exe start -p running-upgrade-885700 --memory=2200 --vm-driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestCertExpiration (6m6s)
	TestKubernetesUpgrade (15m22s)
	TestNetworkPlugins (15m22s)
	TestRunningBinaryUpgrade (33s)
	TestStartStop (4m30s)
	TestStoppedBinaryUpgrade (4m30s)
	TestStoppedBinaryUpgrade/Upgrade (4m29s)

goroutine 2154 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

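Above, goroutine 2154 is the test binary's own deadline alarm firing. A minimal sketch, assuming a simplified model of testing.(*M).startAlarm rather than the real implementation, of how that panic is produced:

	package main

	import (
		"fmt"
		"time"
	)

	// startAlarm mimics the shape of the test runner's deadline: arm a
	// timer that panics once the -timeout duration (3h0m0s in this run)
	// elapses without the tests finishing.
	func startAlarm(timeout time.Duration) *time.Timer {
		return time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
	}

	func main() {
		alarm := startAlarm(50 * time.Millisecond)
		defer alarm.Stop()                 // a run that finishes in time disarms it
		time.Sleep(100 * time.Millisecond) // simulate tests overrunning the deadline
	}
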
goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000161a00, 0xc000775bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007944e0, {0x50ad540, 0x2a, 0x2a}, {0x2d78526?, 0xbb806f?, 0x50d0760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000799ae0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000799ae0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 13 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070b00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2107 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x28152a0?, {0xc000447b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000447b98?, 0xc000447b80?, 0xb0fdd6?, 0x515dbc0?, 0xc000447c08?, 0xb02985?, 0x1f666900a28?, 0xc000447b41?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x784, {0xc000c4153a?, 0x2c6, 0x2c6?}, 0xc000447c04?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0008b3188?, {0xc000c4153a?, 0x0?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0008b3188, {0xc000c4153a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910ed0, {0xc000c4153a?, 0xc47acd?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002478270, {0x3ce1000, 0xc000a1a760})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002478270}, {0x3ce1000, 0xc000a1a760}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x4fd38e0?, {0x3ce1140, 0xc002478270})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x3ce1140?, 0xc002478270?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002478270}, {0x3ce10c0, 0xc000910ed0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x378aff0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 602
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

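Here, goroutine 2107 is the stdout copier that os/exec spawns for any non-*os.File output sink; it stays blocked in syscall.ReadFile until the child process (the minikube run from TestCertExpiration in goroutine 602 below) writes or exits. The same pipeline in miniature, with a placeholder command:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		var out bytes.Buffer
		cmd := exec.Command("go", "version") // placeholder command
		// Because Stdout is not an *os.File, Start creates a pipe plus a
		// goroutine that copies it into the buffer: the goroutine seen
		// blocked in syscall.ReadFile above.
		cmd.Stdout = &out
		if err := cmd.Run(); err != nil { // Run = Start + Wait
			fmt.Println("run failed:", err)
			return
		}
		fmt.Print(out.String())
	}
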
goroutine 602 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7ff894204de0?, {0xc0020b79a8?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7c0, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0025ca5a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c931e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c931e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002206340, 0xc000c931e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc002206340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc002206340, 0x378aff0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1917 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e6820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e6820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e6820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e6820, 0xc000071280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

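Here, goroutine 1917 (and every other goroutine parked in testing.(*testContext).waitParallel below) is a subtest that called t.Parallel() and is queued for a slot under -test.parallel; it is waiting, not deadlocked. A hypothetical test, not from the minikube suite, that reproduces the same frame:

	package example

	import (
		"testing"
		"time"
	)

	// Run with: go test -run TestParallelSlots -parallel 2 -v
	// With four subtests and two slots, two goroutines block inside
	// t.Parallel() in the same waitParallel frame as the dump above.
	func TestParallelSlots(t *testing.T) {
		for _, name := range []string{"a", "b", "c", "d"} {
			name := name // capture the loop variable for the closure
			t.Run(name, func(t *testing.T) {
				t.Parallel()
				time.Sleep(100 * time.Millisecond)
			})
		}
	}
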
goroutine 601 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022061a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022061a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0022061a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0022061a0, 0x378aff8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 87 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 86
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 1919 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e6d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e6d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e6d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e6d00, 0xc000071480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 164 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a10ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 165 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0022fc340, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2078 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002102ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002102ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002102ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002102ea0, 0xc00057a940)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2072
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2108 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c931e0, 0xc002592180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 602
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2079 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ff894204de0?, {0xc000cbf6a8?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6fc, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002104870)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0008a4dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0008a4dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002103040, 0xc0008a4dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc000cbfc20?, {0x3cee838, 0xc000128820}, 0x378c2c8, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3cee838?, 0xc000128820?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc002067e28, 0x3b9aca00, 0x1a3185c5000, {0xc002067d08?, 0x2814be0?, 0xb4f288?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc002103040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc002103040, 0xc00057a980)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1967
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

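Above, goroutine 2079 shows TestStoppedBinaryUpgrade retrying the old-binary start through minikube's retry.Expo, which wraps github.com/cenkalti/backoff/v4; the two duration arguments in the frame decode to a 1s initial interval and a 30m cap. A trimmed-down sketch of that retry shape (shortened durations, hypothetical operation):

	package main

	import (
		"errors"
		"fmt"
		"time"

		"github.com/cenkalti/backoff/v4"
	)

	func main() {
		b := backoff.NewExponentialBackOff()
		b.InitialInterval = time.Second    // 1s, the first duration in the frame
		b.MaxElapsedTime = 5 * time.Second // 30m in the real test; shortened here

		attempts := 0
		start := func() error { // stands in for re-running the old binary
			attempts++
			if attempts < 3 {
				return errors.New("not started yet")
			}
			return nil
		}
		if err := backoff.Retry(start, b); err != nil {
			fmt.Println("gave up:", err)
			return
		}
		fmt.Println("succeeded after", attempts, "attempts")
	}
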
goroutine 1929 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e6680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0026e6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0026e6680, 0x378b0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2151 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002073b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xb02c41?, 0xc002073b80?, 0xb0fdd6?, 0x515dbc0?, 0xc002073c08?, 0xb02985?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x394, {0xc0007d620e?, 0x5f2, 0xc0007d6000?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0000ecf08?, {0xc0007d620e?, 0xb3c1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0000ecf08, {0xc0007d620e, 0x5f2, 0x5f2})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910fe8, {0xc0007d620e?, 0xc002073d98?, 0x20e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002594540, {0x3ce1000, 0xc0000a61c8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002594540}, {0x3ce1000, 0xc0000a61c8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ce1140, 0xc002594540})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xb00c36?, {0x3ce1140?, 0xc002594540?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002594540}, {0x3ce10c0, 0xc000910fe8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022d0480?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1968
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2076 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002102b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002102b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002102b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002102b60, 0xc00057a580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2072
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 929 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0022fcc90, 0x32)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2814be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00092b9e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0022fccc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009135c0, {0x3ce2440, 0xc00260e3c0}, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009135c0, 0x3b9aca00, 0x0, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

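Above, goroutine 929 (like goroutine 183 below) is a client-go certificate-rotation worker: a wait.BackoffUntil loop draining a workqueue until its stop channel closes, which is normal idle state rather than a leak. The loop primitive in miniature:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		stop := make(chan struct{})
		go func() {
			time.Sleep(3 * time.Second)
			close(stop) // closing the stop channel ends the loop
		}()
		// Invoke the worker once per second until stop closes, the same
		// shape as the JitterUntil/BackoffUntil frames above.
		wait.Until(func() { fmt.Println("process next work item") }, time.Second, stop)
	}
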
goroutine 1920 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e6ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e6ea0, 0xc000071580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 183 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0022fc310, 0x3c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2814be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a10a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0022fc340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000912e20, {0x3ce2440, 0xc0008999b0}, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000912e20, 0x3b9aca00, 0x0, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 184 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3d05e40, 0xc0001064e0}, 0xc00076df50, 0xc00076df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3d05e40, 0xc0001064e0}, 0x0?, 0xc00076df50, 0xc00076df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3d05e40?, 0xc0001064e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 185 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1989 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e76c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e76c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e76c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e76c0, 0xc000071880)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2075 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021029c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021029c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021029c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021029c0, 0xc00057a4c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2072
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1035 [chan send, 139 minutes]:
os/exec.(*Cmd).watchCtx(0xc002575760, 0xc002469d40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 986
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 884 [chan send, 141 minutes]:
os/exec.(*Cmd).watchCtx(0xc0009af340, 0xc00092fe60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 779
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 1968 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ff894204de0?, {0xc00006b798?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x714, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0025cb320)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c93760)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c93760)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002207ba0, 0xc000c93760)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002207ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:275 +0x1445
testing.tRunner(0xc002207ba0, 0x378b0a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 946 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3d05e40, 0xc0001064e0}, 0xc0026bff50, 0xc0026bff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3d05e40, 0xc0001064e0}, 0x90?, 0xc0026bff50, 0xc0026bff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3d05e40?, 0xc0001064e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0026bffd0?, 0xc8e404?, 0xc0024081c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 1987 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e7380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e7380, 0xc000071780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2153 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c93760, 0xc0001079e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1968
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 1921 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e7040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e7040, 0xc000071680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1988 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e7520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e7520, 0xc000071800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1964 [chan receive, 5 minutes]:
testing.(*T).Run(0xc002207520, {0x2d1c9f1?, 0xc47333?}, 0x378b2f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc002207520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc002207520, 0x378b120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 930 [chan receive, 141 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0022fccc0, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 804
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 675 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x1f66bd55358, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xb0fdd6?, 0x515dbc0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc002210020, 0xc0024adbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc002210008, 0x280, {0xc000c7a000?, 0x0?, 0x0?}, 0xc000101008?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc002210008, 0xc0024add90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc002210008)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0009a62a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009a62a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0006ca0f0, {0x3cf8ee0, 0xc0009a62a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0006ca0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc002206000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 656
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

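Above, goroutine 675 has held the functional test's local HTTP proxy listener open for 163 minutes; startHTTPProxy runs the net/http accept loop in a background goroutine for the lifetime of the suite. The same pattern in miniature (hypothetical port and handler):

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		srv := &http.Server{
			Addr: "127.0.0.1:18080", // hypothetical port
			Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				fmt.Fprintln(w, "proxied") // placeholder handler body
			}),
		}
		// ListenAndServe blocks in Accept (the frame goroutine 675 is
		// parked in), so it runs in a background goroutine while the
		// tests proceed.
		go func() {
			if err := srv.ListenAndServe(); err != http.ErrServerClosed {
				log.Println("proxy stopped:", err)
			}
		}()
		time.Sleep(2 * time.Second) // the dependent tests would run here
		srv.Close()
	}
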
goroutine 1897 [chan receive, 15 minutes]:
testing.(*T).Run(0xc002206d00, {0x2d1c9f1?, 0xb6f48d?}, 0xc0000081c8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002206d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002206d00, 0x378b0d8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2072 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0021021a0, 0x378b2f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1964
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2137 [syscall, locked to thread]:
syscall.SyscallN(0xc0007e9000?, {0xc002429b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1?, 0xc002429b80?, 0xb0fdd6?, 0x515dbc0?, 0xc002429c08?, 0xb02985?, 0x1f666900eb8?, 0x41?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7f0, {0xc000c419ff?, 0x201, 0xbb417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0008b3408?, {0xc000c419ff?, 0x0?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0008b3408, {0xc000c419ff, 0x201, 0x201})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910ef0, {0xc000c419ff?, 0x1f66be319a8?, 0x68?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002318630, {0x3ce1000, 0xc0006ad090})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002318630}, {0x3ce1000, 0xc0006ad090}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x10?, {0x3ce1140, 0xc002318630})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002429eb8?, {0x3ce1140?, 0xc002318630?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002318630}, {0x3ce10c0, 0xc000910ef0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00253e660?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1966
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2077 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002102d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002102d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002102d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002102d00, 0xc00057a740)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2072
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1967 [chan receive, 5 minutes]:
testing.(*T).Run(0xc002207a00, {0x2d20991?, 0x3005753e800?}, 0xc00057a980)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc002207a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc002207a00, 0x378b128)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1918 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e6b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e6b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e6b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e6b60, 0xc000071380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2073 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021024e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021024e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021024e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0021024e0, 0xc00057a440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2072
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2138 [syscall, locked to thread]:
syscall.SyscallN(0xb00c36?, {0xc002581b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00004c941?, 0xc002581b80?, 0xb0fdd6?, 0x515dbc0?, 0xc002581c08?, 0xb02985?, 0x1f666900598?, 0x35?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x748, {0xc0023c6200?, 0x200, 0xc0023c6200?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0008b3908?, {0xc0023c6200?, 0xb3c1be?, 0x200?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0008b3908, {0xc0023c6200, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910f18, {0xc0023c6200?, 0xc002581d98?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002318660, {0x3ce1000, 0xc0000a6208})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002318660}, {0x3ce1000, 0xc0000a6208}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ce1140, 0xc002318660})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xb00c36?, {0x3ce1140?, 0xc002318660?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002318660}, {0x3ce10c0, 0xc000910f18}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0024082c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1966
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 897 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00092bbc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 804
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2139 [select]:
os/exec.(*Cmd).watchCtx(0xc0009266e0, 0xc00092fec0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1966
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 1966 [syscall, locked to thread]:
syscall.SyscallN(0x7ff894204de0?, {0xc0020b36c0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x560, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002e94bd0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0009266e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0009266e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002207860, 0xc0009266e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:120 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0020b3c38?, {0x3cee838, 0xc0009026e0}, 0x378c2c8, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3cee838?, 0xc0009026e0?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0020b3e08, 0x3b9aca00, 0x1a3185c5000, {0xc0020b3d10?, 0x2814be0?, 0x71386?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002207860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:125 +0x4f4
testing.tRunner(0xc002207860, 0x378b100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 947 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 946
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1986 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e71e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026e71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026e71e0, 0xc000071700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1916
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1916 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0026e64e0, 0xc0000081c8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1897
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2152 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002411b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4d?, 0xc002411b80?, 0xb0fdd6?, 0x515dbc0?, 0xc002411c08?, 0xb02985?, 0x1f666900598?, 0xb78c77?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x480, {0xc00211420e?, 0x1df2, 0xbb417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002210a08?, {0xc00211420e?, 0xb3c1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002210a08, {0xc00211420e, 0x1df2, 0x1df2})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000911008, {0xc00211420e?, 0xc000cf3dc0?, 0x1e30?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002594570, {0x3ce1000, 0xc000c0a140})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002594570}, {0x3ce1000, 0xc000c0a140}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002411e78?, {0x3ce1140, 0xc002594570})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002411f38?, {0x3ce1140?, 0xc002594570?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002594570}, {0x3ce10c0, 0xc000911008}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00253e480?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1968
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1928 [chan receive, 15 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026e61a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026e61a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0026e61a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0026e61a0, 0x378b0e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2080 [syscall, locked to thread]:
syscall.SyscallN(0xc00006be80?, {0xc002b81b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2?, 0xc002b81b80?, 0xb0fdd6?, 0x515dbc0?, 0xc002b81c08?, 0xb02985?, 0x1f666900a28?, 0xc002206d4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3c4, {0xc00244cb48?, 0x4b8, 0xbb417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00093e788?, {0xc00244cb48?, 0xb3c1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00093e788, {0xc00244cb48, 0x4b8, 0x4b8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910fa8, {0xc00244cb48?, 0xc002b81d98?, 0x22f?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002168600, {0x3ce1000, 0xc000a1a860})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002168600}, {0x3ce1000, 0xc000a1a860}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ce1140, 0xc002168600})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xb00c36?, {0x3ce1140?, 0xc002168600?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002168600}, {0x3ce10c0, 0xc000910fa8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002592120?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2079
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2106 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002093b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc8c441?, 0xc002093b80?, 0xb0fdd6?, 0x515dbc0?, 0xc002093c08?, 0xb02985?, 0x1f666900a28?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x460, {0xc0021c2a46?, 0x5ba, 0xbb417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0008b2c88?, {0xc0021c2a46?, 0xb3c171?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0008b2c88, {0xc0021c2a46, 0x5ba, 0x5ba})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910eb8, {0xc0021c2a46?, 0xc002289dc0?, 0x207?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002478240, {0x3ce1000, 0xc0000a6170})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002478240}, {0x3ce1000, 0xc0000a6170}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002093e78?, {0x3ce1140, 0xc002478240})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002093f38?, {0x3ce1140?, 0xc002478240?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002478240}, {0x3ce10c0, 0xc000910eb8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00253e540?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 602
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2081 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x2815360?, {0xc000773b20?, 0xb17ea5?, 0x515dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xb02a41?, 0xc000773b80?, 0xb0fdd6?, 0x515dbc0?, 0xc000773c08?, 0xb02985?, 0x1f666900108?, 0xc0021fff35?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x730, {0xc0023c6000?, 0x200, 0xc0023c6000?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00093ec88?, {0xc0023c6000?, 0xb3c1be?, 0x200?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00093ec88, {0xc0023c6000, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000910fc0, {0xc0023c6000?, 0xc0027cc8c0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002168630, {0x3ce1000, 0xc0000a6070})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ce1140, 0xc002168630}, {0x3ce1000, 0xc0000a6070}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000773e78?, {0x3ce1140, 0xc002168630})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000773f38?, {0x3ce1140?, 0xc002168630?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3ce1140, 0xc002168630}, {0x3ce10c0, 0xc000910fc0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002453f20?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2079
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2130 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0008a4dc0, 0xc000107680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2079
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2074 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0007923c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002102820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002102820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002102820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002102820, 0xc00057a480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2072
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390


Test pass (119/193)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.44
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.21
9 TestDownloadOnly/v1.20.0/DeleteAll 1.46
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.39
12 TestDownloadOnly/v1.30.0/json-events 10.95
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.22
18 TestDownloadOnly/v1.30.0/DeleteAll 1.27
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.44
21 TestBinaryMirror 6.89
22 TestOffline 555.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
27 TestAddons/Setup 375.73
30 TestAddons/parallel/Ingress 63.34
31 TestAddons/parallel/InspektorGadget 27.68
32 TestAddons/parallel/MetricsServer 20.61
33 TestAddons/parallel/HelmTiller 28.19
35 TestAddons/parallel/CSI 90.38
36 TestAddons/parallel/Headlamp 36.23
37 TestAddons/parallel/CloudSpanner 21.83
38 TestAddons/parallel/LocalPath 39.99
39 TestAddons/parallel/NvidiaDevicePlugin 19.87
40 TestAddons/parallel/Yakd 6.02
43 TestAddons/serial/GCPAuth/Namespaces 0.36
44 TestAddons/StoppedEnableDisable 51.56
47 TestDockerFlags 250.03
49 TestForceSystemdEnv 639.53
56 TestErrorSpam/start 16.71
57 TestErrorSpam/status 35.64
58 TestErrorSpam/pause 22.31
59 TestErrorSpam/unpause 22.05
60 TestErrorSpam/stop 59.09
63 TestFunctional/serial/CopySyncFile 0.04
64 TestFunctional/serial/StartWithProxy 235.91
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 126.83
67 TestFunctional/serial/KubeContext 0.13
68 TestFunctional/serial/KubectlGetPods 0.25
71 TestFunctional/serial/CacheCmd/cache/add_remote 25.38
72 TestFunctional/serial/CacheCmd/cache/add_local 10.72
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
74 TestFunctional/serial/CacheCmd/cache/list 0.21
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.76
76 TestFunctional/serial/CacheCmd/cache/cache_reload 34.19
77 TestFunctional/serial/CacheCmd/cache/delete 0.4
78 TestFunctional/serial/MinikubeKubectlCmd 0.47
82 TestFunctional/serial/LogsCmd 168.59
83 TestFunctional/serial/LogsFileCmd 180.73
95 TestFunctional/parallel/AddonsCmd 0.6
98 TestFunctional/parallel/SSHCmd 18.89
99 TestFunctional/parallel/CpCmd 59.1
101 TestFunctional/parallel/FileSync 8.95
102 TestFunctional/parallel/CertSync 54.86
108 TestFunctional/parallel/NonActiveRuntimeDisabled 10.63
110 TestFunctional/parallel/License 3.24
111 TestFunctional/parallel/ProfileCmd/profile_not_create 12.13
112 TestFunctional/parallel/ProfileCmd/profile_list 11.11
113 TestFunctional/parallel/Version/short 0.17
114 TestFunctional/parallel/Version/components 7.28
115 TestFunctional/parallel/ProfileCmd/profile_json_output 11.06
121 TestFunctional/parallel/ImageCommands/Setup 4.68
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
143 TestFunctional/parallel/ImageCommands/ImageRemove 120.7
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 60.23
146 TestFunctional/parallel/UpdateContextCmd/no_changes 2.24
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.26
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.22
149 TestFunctional/delete_addon-resizer_images 0.47
150 TestFunctional/delete_my-image_image 0.19
151 TestFunctional/delete_minikube_cached_images 0.18
159 TestMultiControlPlane/serial/NodeLabels 0.17
167 TestImageBuild/serial/Setup 191.6
168 TestImageBuild/serial/NormalBuild 9.3
169 TestImageBuild/serial/BuildWithBuildArg 8.55
170 TestImageBuild/serial/BuildWithDockerIgnore 7.47
171 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.35
175 TestJSONOutput/start/Command 239.55
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.56
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.53
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 33.63
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.42
203 TestMainNoArgs 0.21
204 TestMinikubeProfile 505.74
207 TestMountStart/serial/StartWithMountFirst 151.46
208 TestMountStart/serial/VerifyMountFirst 9.12
209 TestMountStart/serial/StartWithMountSecond 149.05
210 TestMountStart/serial/VerifyMountSecond 9.11
211 TestMountStart/serial/DeleteFirst 26.55
212 TestMountStart/serial/VerifyMountPostDelete 8.9
213 TestMountStart/serial/Stop 28.84
214 TestMountStart/serial/RestartStopped 115.02
215 TestMountStart/serial/VerifyMountPostStop 9.02
218 TestMultiNode/serial/FreshStart2Nodes 409.99
219 TestMultiNode/serial/DeployApp2Nodes 8.6
221 TestMultiNode/serial/AddNode 219.51
222 TestMultiNode/serial/MultiNodeLabels 0.18
223 TestMultiNode/serial/ProfileList 9.45
224 TestMultiNode/serial/CopyFile 348.34
225 TestMultiNode/serial/StopNode 73.61
226 TestMultiNode/serial/StartAfterStop 178.4
232 TestPreload 562.92
233 TestScheduledStopWindows 321.67

TestDownloadOnly/v1.20.0/json-events (16.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-808700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-808700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.4342991s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.44s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-808700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-808700: exit status 85 (210.5647ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-808700 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT |          |
	|         | -p download-only-808700        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:08:37
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:08:37.553178    7948 out.go:291] Setting OutFile to fd 640 ...
	I0428 16:08:37.554336    7948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:08:37.554336    7948 out.go:304] Setting ErrFile to fd 644...
	I0428 16:08:37.554336    7948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0428 16:08:37.561147    7948 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0428 16:08:37.578800    7948 out.go:298] Setting JSON to true
	I0428 16:08:37.584255    7948 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3160,"bootTime":1714342556,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:08:37.584783    7948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:08:37.594006    7948 out.go:97] [download-only-808700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:08:37.594006    7948 notify.go:220] Checking for updates...
	I0428 16:08:37.597516    7948 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	W0428 16:08:37.594006    7948 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0428 16:08:37.600178    7948 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:08:37.603577    7948 out.go:169] MINIKUBE_LOCATION=17977
	I0428 16:08:37.605750    7948 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0428 16:08:37.611405    7948 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0428 16:08:37.612365    7948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 16:08:42.737345    7948 out.go:97] Using the hyperv driver based on user configuration
	I0428 16:08:42.737345    7948 start.go:297] selected driver: hyperv
	I0428 16:08:42.737345    7948 start.go:901] validating driver "hyperv" against <nil>
	I0428 16:08:42.738031    7948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 16:08:42.791055    7948 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0428 16:08:42.792351    7948 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0428 16:08:42.792685    7948 cni.go:84] Creating CNI manager for ""
	I0428 16:08:42.792685    7948 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0428 16:08:42.792839    7948 start.go:340] cluster config:
	{Name:download-only-808700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-808700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:08:42.793974    7948 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:08:42.797962    7948 out.go:97] Downloading VM boot image ...
	I0428 16:08:42.797962    7948 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 16:08:46.606874    7948 out.go:97] Starting "download-only-808700" primary control-plane node in "download-only-808700" cluster
	I0428 16:08:46.611313    7948 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0428 16:08:46.651272    7948 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0428 16:08:46.651272    7948 cache.go:56] Caching tarball of preloaded images
	I0428 16:08:46.651816    7948 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0428 16:08:46.821144    7948 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0428 16:08:46.822851    7948 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0428 16:08:46.900445    7948 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0428 16:08:50.682136    7948 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0428 16:08:50.683303    7948 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0428 16:08:51.706963    7948 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0428 16:08:51.713635    7948 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-808700\config.json ...
	I0428 16:08:51.713973    7948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-808700\config.json: {Name:mkfcb0fa57f6fa6c6c489cc5c17d90bc3cd978cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:08:51.714333    7948 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0428 16:08:51.715582    7948 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-808700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-808700"

-- /stdout --
** stderr ** 
	W0428 16:08:53.975604    5288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.21s)

TestDownloadOnly/v1.20.0/DeleteAll (1.46s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4587677s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.46s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.39s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-808700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-808700: (1.3823765s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.39s)

TestDownloadOnly/v1.30.0/json-events (10.95s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-975500 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-975500 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (10.9479853s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.95s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-975500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-975500: exit status 85 (218.339ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-808700 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT |                     |
	|         | -p download-only-808700        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT | 28 Apr 24 16:08 PDT |
	| delete  | -p download-only-808700        | download-only-808700 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT | 28 Apr 24 16:08 PDT |
	| start   | -o=json --download-only        | download-only-975500 | minikube1\jenkins | v1.33.0 | 28 Apr 24 16:08 PDT |                     |
	|         | -p download-only-975500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 16:08:57
	Running on machine: minikube1
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 16:08:57.070013    9928 out.go:291] Setting OutFile to fd 764 ...
	I0428 16:08:57.070548    9928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:08:57.070548    9928 out.go:304] Setting ErrFile to fd 768...
	I0428 16:08:57.070548    9928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:08:57.093871    9928 out.go:298] Setting JSON to true
	I0428 16:08:57.098048    9928 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3180,"bootTime":1714342556,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:08:57.098162    9928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:08:57.102913    9928 out.go:97] [download-only-975500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:08:57.102913    9928 notify.go:220] Checking for updates...
	I0428 16:08:57.105641    9928 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:08:57.108096    9928 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:08:57.110123    9928 out.go:169] MINIKUBE_LOCATION=17977
	I0428 16:08:57.113203    9928 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0428 16:08:57.123739    9928 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0428 16:08:57.123955    9928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 16:09:02.257409    9928 out.go:97] Using the hyperv driver based on user configuration
	I0428 16:09:02.257409    9928 start.go:297] selected driver: hyperv
	I0428 16:09:02.257409    9928 start.go:901] validating driver "hyperv" against <nil>
	I0428 16:09:02.258227    9928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 16:09:02.303058    9928 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0428 16:09:02.303723    9928 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0428 16:09:02.304257    9928 cni.go:84] Creating CNI manager for ""
	I0428 16:09:02.304257    9928 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0428 16:09:02.304257    9928 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0428 16:09:02.304705    9928 start.go:340] cluster config:
	{Name:download-only-975500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-975500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 16:09:02.304705    9928 iso.go:125] acquiring lock: {Name:mk09a1bdb7773256ffaf72c9993441107ac36a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 16:09:02.308509    9928 out.go:97] Starting "download-only-975500" primary control-plane node in "download-only-975500" cluster
	I0428 16:09:02.308509    9928 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:09:02.336902    9928 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 16:09:02.350410    9928 cache.go:56] Caching tarball of preloaded images
	I0428 16:09:02.350735    9928 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:09:02.354367    9928 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0428 16:09:02.354367    9928 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0428 16:09:02.428639    9928 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0428 16:09:05.651275    9928 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0428 16:09:05.660286    9928 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0428 16:09:06.571712    9928 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0428 16:09:06.575119    9928 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-975500\config.json ...
	I0428 16:09:06.575634    9928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-975500\config.json: {Name:mk49ec7e86910d771a010041420d67d4484d8d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 16:09:06.576968    9928 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0428 16:09:06.577507    9928 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.30.0/kubectl.exe
	
	
	* The control-plane node download-only-975500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-975500"

-- /stdout --
** stderr ** 
	W0428 16:09:08.018952    8256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.22s)

TestDownloadOnly/v1.30.0/DeleteAll (1.27s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2723549s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.27s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.44s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-975500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-975500: (1.43796s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.44s)

TestBinaryMirror (6.89s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-654900 --alsologtostderr --binary-mirror http://127.0.0.1:64339 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-654900 --alsologtostderr --binary-mirror http://127.0.0.1:64339 --driver=hyperv: (6.0210077s)
helpers_test.go:175: Cleaning up "binary-mirror-654900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-654900
--- PASS: TestBinaryMirror (6.89s)
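
A minimal PowerShell sketch of what this test exercises (not part of the log): --binary-mirror redirects the kubectl/kubelet/kubeadm binary downloads to an alternate host. The profile name and URL below are illustrative placeholders; the run above used a harness-chosen local port.

  $mirror = "http://127.0.0.1:8080"   # hypothetical local mirror serving the same release paths
  out/minikube-windows-amd64.exe start --download-only -p mirror-demo --binary-mirror $mirror --driver=hyperv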

TestOffline (555.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-069600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-069600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (8m34.2447857s)
helpers_test.go:175: Cleaning up "offline-docker-069600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-069600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-069600: (41.674059s)
--- PASS: TestOffline (555.92s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-610300
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-610300: exit status 85 (219.5478ms)

-- stdout --
	* Profile "addons-610300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610300"

-- /stdout --
** stderr ** 
	W0428 16:09:20.533058    1928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-610300
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-610300: exit status 85 (282.5816ms)

-- stdout --
	* Profile "addons-610300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610300"

-- /stdout --
** stderr ** 
	W0428 16:09:20.536375   11028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

TestAddons/Setup (375.73s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-610300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-610300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m15.7288869s)
--- PASS: TestAddons/Setup (375.73s)

TestAddons/parallel/Ingress (63.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-610300 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-610300 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-610300 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [636895b9-db6c-4b2a-9ef9-cd409d8a6c4c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [636895b9-db6c-4b2a-9ef9-cd409d8a6c4c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0181965s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (8.8945451s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-610300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0428 16:17:16.829262    9720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-610300 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 ip: (2.2787028s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.27.234.130
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable ingress-dns --alsologtostderr -v=1: (15.0841897s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable ingress --alsologtostderr -v=1: (21.0000921s)
--- PASS: TestAddons/parallel/Ingress (63.34s)
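
The pass above reduces to two probes that can be rerun by hand against the same profile; a short PowerShell sketch using the exact commands from this log (the resolver IP comes from "minikube ip" and differs per run):

  out/minikube-windows-amd64.exe -p addons-610300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  $ip = out/minikube-windows-amd64.exe -p addons-610300 ip
  nslookup hello-john.test $ip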

TestAddons/parallel/InspektorGadget (27.68s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vg6dv" [b790d080-a046-4cb5-8d24-03ea6531cfd7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0130696s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-610300
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-610300: (21.6606871s)
--- PASS: TestAddons/parallel/InspektorGadget (27.68s)

TestAddons/parallel/MetricsServer (20.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 22.3603ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-2xflh" [e2d99e75-170b-4505-8e11-c78cd387eaaf] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0226655s
addons_test.go:415: (dbg) Run:  kubectl --context addons-610300 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable metrics-server --alsologtostderr -v=1: (15.1227233s)
--- PASS: TestAddons/parallel/MetricsServer (20.61s)

TestAddons/parallel/HelmTiller (28.19s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.4667ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-cwcf6" [c08f6083-573b-4268-ad97-bbcfd146fef1] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0130031s
addons_test.go:473: (dbg) Run:  kubectl --context addons-610300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-610300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.3016031s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable helm-tiller --alsologtostderr -v=1: (14.8423267s)
--- PASS: TestAddons/parallel/HelmTiller (28.19s)

TestAddons/parallel/CSI (90.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 43.8091ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-610300 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-610300 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [38357464-82d5-4103-beb1-c46d8716d3f6] Pending
helpers_test.go:344: "task-pv-pod" [38357464-82d5-4103-beb1-c46d8716d3f6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [38357464-82d5-4103-beb1-c46d8716d3f6] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.0113535s
addons_test.go:584: (dbg) Run:  kubectl --context addons-610300 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-610300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-610300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-610300 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-610300 delete pod task-pv-pod: (1.7905973s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-610300 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-610300 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-610300 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7d4c4d64-76d9-4289-a1fd-9270069e4e26] Pending
helpers_test.go:344: "task-pv-pod-restore" [7d4c4d64-76d9-4289-a1fd-9270069e4e26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7d4c4d64-76d9-4289-a1fd-9270069e4e26] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0179707s
addons_test.go:626: (dbg) Run:  kubectl --context addons-610300 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-610300 delete pod task-pv-pod-restore: (1.3214021s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-610300 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-610300 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.2861184s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable volumesnapshots --alsologtostderr -v=1: (16.6643332s)
--- PASS: TestAddons/parallel/CSI (90.38s)
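
The repeated helpers_test.go:394 lines above are the harness polling the PVC phase; a minimal PowerShell sketch of that wait loop, using the names from this log ("Bound" is the standard PVC phase the wait targets):

  do {
      $phase = kubectl --context addons-610300 get pvc hpvc-restore -n default -o jsonpath='{.status.phase}'
      Start-Sleep -Seconds 2              # poll until the claim reports Bound
  } while ($phase -ne "Bound")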

TestAddons/parallel/Headlamp (36.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-610300 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-610300 --alsologtostderr -v=1: (17.2052591s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-smhrc" [ba0cb936-b707-403c-bf43-115ad76ab923] Pending
helpers_test.go:344: "headlamp-7559bf459f-smhrc" [ba0cb936-b707-403c-bf43-115ad76ab923] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-smhrc" [ba0cb936-b707-403c-bf43-115ad76ab923] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0104276s
--- PASS: TestAddons/parallel/Headlamp (36.23s)

TestAddons/parallel/CloudSpanner (21.83s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-vfs7g" [3fb9cfb6-fe2d-4e9f-8825-7c68e81ba4f6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0109257s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-610300
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-610300: (16.7973903s)
--- PASS: TestAddons/parallel/CloudSpanner (21.83s)

TestAddons/parallel/LocalPath (39.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-610300 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-610300 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [40004951-52f1-45f8-a0b5-201852efbee8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [40004951-52f1-45f8-a0b5-201852efbee8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [40004951-52f1-45f8-a0b5-201852efbee8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 14.0183115s
addons_test.go:891: (dbg) Run:  kubectl --context addons-610300 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 ssh "cat /opt/local-path-provisioner/pvc-449e89c9-f392-43ed-ae7e-bcdaa8a76677_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 ssh "cat /opt/local-path-provisioner/pvc-449e89c9-f392-43ed-ae7e-bcdaa8a76677_default_test-pvc/file1": (9.620551s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-610300 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-610300 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-610300 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-610300 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.7072904s)
--- PASS: TestAddons/parallel/LocalPath (39.99s)

TestAddons/parallel/NvidiaDevicePlugin (19.87s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p6hd4" [5ba420ef-3163-4b78-9972-5616fc2381f7] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0218424s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-610300
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-610300: (14.8329843s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (19.87s)

TestAddons/parallel/Yakd (6.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-jjtc6" [5e38f255-3ba2-47eb-828e-6c0c03bdaa25] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0164758s
--- PASS: TestAddons/parallel/Yakd (6.02s)

TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-610300 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-610300 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

TestAddons/StoppedEnableDisable (51.56s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-610300
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-610300: (39.7412541s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-610300
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-610300: (4.8601542s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-610300
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-610300: (4.482232s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-610300
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-610300: (2.4532038s)
--- PASS: TestAddons/StoppedEnableDisable (51.56s)

TestDockerFlags (250.03s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-069600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-069600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (3m6.4998004s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-069600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-069600 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.4795508s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-069600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-069600 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.5581682s)
helpers_test.go:175: Cleaning up "docker-flags-069600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-069600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-069600: (44.4896173s)
--- PASS: TestDockerFlags (250.03s)
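
To re-check this result by hand: values passed with --docker-env surface in the Docker unit's Environment property, and --docker-opt values land on its ExecStart line. The same two probes the test runs, as PowerShell one-liners:

  out/minikube-windows-amd64.exe -p docker-flags-069600 ssh "sudo systemctl show docker --property=Environment --no-pager"   # should list FOO=BAR and BAZ=BAT
  out/minikube-windows-amd64.exe -p docker-flags-069600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # should carry the debug and icc=true options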

TestForceSystemdEnv (639.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-844300 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0428 18:53:41.026157    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:55:36.444496    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-844300 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (9m50.9543513s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-844300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-844300 ssh "docker info --format {{.CgroupDriver}}": (9.6792644s)
helpers_test.go:175: Cleaning up "force-systemd-env-844300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-844300
E0428 19:03:41.025723    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-844300: (38.8985704s)
--- PASS: TestForceSystemdEnv (639.53s)
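
The pass condition here is a single probe: with systemd forced, the node's Docker daemon must report systemd (not cgroupfs) as its cgroup driver. Runnable by hand while the profile exists:

  out/minikube-windows-amd64.exe -p force-systemd-env-844300 ssh "docker info --format {{.CgroupDriver}}"   # should print: systemd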

TestErrorSpam/start (16.71s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 start --dry-run: (5.5576037s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 start --dry-run: (5.5693812s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 start --dry-run: (5.5789205s)
--- PASS: TestErrorSpam/start (16.71s)

TestErrorSpam/status (35.64s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 status: (12.3043229s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 status: (11.5591389s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 status: (11.7763179s)
--- PASS: TestErrorSpam/status (35.64s)

TestErrorSpam/pause (22.31s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 pause: (7.6178363s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 pause: (7.4068234s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 pause: (7.2861311s)
--- PASS: TestErrorSpam/pause (22.31s)

TestErrorSpam/unpause (22.05s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 unpause: (7.4326154s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 unpause: (7.3338673s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 unpause: (7.2829153s)
--- PASS: TestErrorSpam/unpause (22.05s)

TestErrorSpam/stop (59.09s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 stop
E0428 16:25:36.424910    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 stop: (38.1481676s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 stop: (10.566307s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-906500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-906500 stop: (10.3685316s)
--- PASS: TestErrorSpam/stop (59.09s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\3228\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (235.91s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-285400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-285400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m55.9008648s)
--- PASS: TestFunctional/serial/StartWithProxy (235.91s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (126.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-285400 --alsologtostderr -v=8
E0428 16:30:36.430679    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-285400 --alsologtostderr -v=8: (2m6.8291156s)
functional_test.go:659: soft start took 2m6.8321447s for "functional-285400" cluster.
--- PASS: TestFunctional/serial/SoftStart (126.83s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-285400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (25.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cache add registry.k8s.io/pause:3.1: (8.6675523s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cache add registry.k8s.io/pause:3.3: (8.4399712s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cache add registry.k8s.io/pause:latest: (8.2756007s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (25.38s)

TestFunctional/serial/CacheCmd/cache/add_local (10.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-285400 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3303377207\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-285400 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3303377207\001: (2.2911419s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cache add minikube-local-cache-test:functional-285400
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cache add minikube-local-cache-test:functional-285400: (7.99552s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cache delete minikube-local-cache-test:functional-285400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-285400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.72s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.21s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh sudo crictl images: (8.7524028s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.76s)

TestFunctional/serial/CacheCmd/cache/cache_reload (34.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.8676617s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.7730562s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0428 16:33:10.793113    8028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cache reload: (7.5866976s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.9464088s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (34.19s)
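
Condensed, the reload sequence above is: delete an image inside the node, confirm crictl no longer sees it, then "cache reload" to re-push every image in minikube's cache. A PowerShell sketch with the same profile and image:

  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo docker rmi registry.k8s.io/pause:latest"
  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # exits 1 while the image is absent
  out/minikube-windows-amd64.exe -p functional-285400 cache reload
  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds after the reload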

TestFunctional/serial/CacheCmd/cache/delete (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.40s)

TestFunctional/serial/MinikubeKubectlCmd (0.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 kubectl -- --context functional-285400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.47s)

TestFunctional/serial/LogsCmd (168.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs: (2m48.59254s)
--- PASS: TestFunctional/serial/LogsCmd (168.59s)

TestFunctional/serial/LogsFileCmd (180.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2173681202\001\logs.txt
E0428 16:45:36.419585    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2173681202\001\logs.txt: (3m0.7227277s)
--- PASS: TestFunctional/serial/LogsFileCmd (180.73s)

TestFunctional/parallel/AddonsCmd (0.6s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.60s)

TestFunctional/parallel/SSHCmd (18.89s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "echo hello": (9.577857s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "cat /etc/hostname": (9.3151587s)
--- PASS: TestFunctional/parallel/SSHCmd (18.89s)

                                                
                                    
TestFunctional/parallel/CpCmd (59.1s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.6840395s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh -n functional-285400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh -n functional-285400 "sudo cat /home/docker/cp-test.txt": (10.1837766s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cp functional-285400:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3335772000\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cp functional-285400:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3335772000\001\cp-test.txt: (10.4438248s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh -n functional-285400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh -n functional-285400 "sudo cat /home/docker/cp-test.txt": (11.306676s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.4580563s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh -n functional-285400 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh -n functional-285400 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.0151079s)
--- PASS: TestFunctional/parallel/CpCmd (59.10s)
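
The round-trip above (host to guest, guest to host, and into a not-yet-existing guest directory, each verified with ssh + sudo cat) is easy to reproduce outside the suite. A minimal Go sketch, assuming a minikube binary on PATH and the same running profile (the suite pins out/minikube-windows-amd64.exe instead):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run invokes minikube against one profile, mirroring the suite's
	// Run/Done helper pattern (spawn, then inspect combined output).
	func run(profile string, args ...string) string {
		out, err := exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		const profile = "functional-285400" // assumed: a running profile
		// Host -> guest copy, then read it back over ssh to confirm.
		run(profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
		fmt.Print(run(profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt"))
	}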

                                                
                                    
TestFunctional/parallel/FileSync (8.95s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3228/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/test/nested/copy/3228/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/test/nested/copy/3228/hosts": (8.9455078s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (8.95s)
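
The file being read was staged on the host: minikube syncs anything under the profile's .minikube/files directory into the guest at the same relative path when the machine starts (the 3228 path component is the test process's PID). A Go sketch of staging such a file, with the home-relative layout and follow-up command as assumptions:

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		// Stage /etc/test/hello for the guest; the copy happens on the next
		// "minikube start" of the profile, not immediately.
		home, err := os.UserHomeDir()
		if err != nil {
			log.Fatal(err)
		}
		dst := filepath.Join(home, ".minikube", "files", "etc", "test", "hello")
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile(dst, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Printf("staged %s; restart the profile, then: minikube ssh \"sudo cat /etc/test/hello\"", dst)
	}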

                                                
                                    
TestFunctional/parallel/CertSync (54.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3228.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/3228.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/3228.pem": (9.1588971s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3228.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /usr/share/ca-certificates/3228.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /usr/share/ca-certificates/3228.pem": (9.1004169s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/51391683.0": (8.9823453s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/32282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/32282.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/32282.pem": (8.8656298s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/32282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /usr/share/ca-certificates/32282.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /usr/share/ca-certificates/32282.pem": (9.6498328s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.097794s)
--- PASS: TestFunctional/parallel/CertSync (54.86s)
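
Each synced certificate is probed three ways: the .pem copy in /etc/ssl/certs, the copy in /usr/share/ca-certificates, and the hash-named entry (the <hash>.0 files are the subject-hash links OpenSSL uses to index a trust store). The same probe loop, sketched in Go with the profile and paths taken from this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		const profile = "functional-285400" // assumed: a running profile
		paths := []string{
			"/etc/ssl/certs/3228.pem",
			"/usr/share/ca-certificates/3228.pem",
			"/etc/ssl/certs/51391683.0",
			"/etc/ssl/certs/32282.pem",
			"/usr/share/ca-certificates/32282.pem",
			"/etc/ssl/certs/3ec20f2e.0",
		}
		for _, p := range paths {
			out, err := exec.Command("minikube", "-p", profile, "ssh", "sudo cat "+p).CombinedOutput()
			if err != nil {
				log.Fatalf("%s missing: %v\n%s", p, err, out)
			}
			fmt.Println(p, "ok")
		}
	}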

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (10.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-285400 ssh "sudo systemctl is-active crio": exit status 1 (10.6276962s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 16:48:04.555250   12204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.63s)
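
The non-zero exit is the expected outcome here: systemctl is-active exits non-zero for a unit that is not active (the guest's status 3 surfaces through ssh), and the test passes because stdout reports "inactive" while docker is the selected runtime. A tolerant check of the same shape, sketched in Go (profile name assumed):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "functional-285400", "ssh", "sudo systemctl is-active crio")
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil {
			// A plain non-zero exit is fine as long as the unit reports
			// inactive; anything else is a real failure.
			if _, ok := err.(*exec.ExitError); !ok || !strings.Contains(state, "inactive") {
				log.Fatalf("unexpected failure: %v\n%s", err, out)
			}
		}
		fmt.Println("crio state:", state)
	}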

                                                
                                    
TestFunctional/parallel/License (3.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.2236153s)
--- PASS: TestFunctional/parallel/License (3.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.6442166s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (11.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.9150401s)
functional_test.go:1311: Took "10.9153543s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "193.1233ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (11.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

                                                
                                    
TestFunctional/parallel/Version/components (7.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 version -o=json --components: (7.2778787s)
--- PASS: TestFunctional/parallel/Version/components (7.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.06s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.8168976s)
functional_test.go:1362: Took "10.8174679s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "239.0383ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.06s)
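
The -o json form exists to be consumed programmatically. A Go sketch that reads it; the field names below (valid, Name, Status) are assumptions about this minikube version's schema, so check them against your own output first:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profileList is an assumed subset of the schema emitted by
	// "minikube profile list -o json"; real output carries more fields.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s\t%s\n", p.Name, p.Status)
		}
	}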

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.3495564s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-285400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-285400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2312: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 6484: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
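
The two warnings are benign: tearing down the tunnel daemon is best-effort, and on Windows a pid that has already exited (or belongs to another session) fails to open with exactly these OpenProcess / access-denied errors, which the helper logs and moves past. The pattern, sketched in Go (the pids are from this run and purely illustrative):

	package main

	import (
		"log"
		"os"
	)

	// bestEffortKill mirrors the helper's tolerance: failing to signal a
	// pid is logged, never fatal.
	func bestEffortKill(pid int) {
		p, err := os.FindProcess(pid) // on Windows this opens the process and can fail
		if err != nil {
			log.Printf("unable to find pid %d: %v", pid, err)
			return
		}
		if err := p.Kill(); err != nil {
			log.Printf("unable to kill pid %d: %v", pid, err)
		}
	}

	func main() {
		for _, pid := range []int{2312, 6484} {
			bestEffortKill(pid)
		}
	}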

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (120.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image rm gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr
E0428 16:55:36.427224    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image rm gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr: (1m0.2246621s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image ls: (1m0.4722992s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-285400
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 image save --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 image save --daemon gcr.io/google-containers/addon-resizer:functional-285400 --alsologtostderr: (59.805337s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-285400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 update-context --alsologtostderr -v=2: (2.2421712s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 update-context --alsologtostderr -v=2: (2.2596643s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-285400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-285400 update-context --alsologtostderr -v=2: (2.2220119s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.22s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.47s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-285400
--- PASS: TestFunctional/delete_addon-resizer_images (0.47s)

                                                
                                    
TestFunctional/delete_my-image_image (0.19s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-285400
--- PASS: TestFunctional/delete_my-image_image (0.19s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.18s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-285400
--- PASS: TestFunctional/delete_minikube_cached_images (0.18s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.17s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-267500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

                                                
                                    
TestImageBuild/serial/Setup (191.6s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-402800 --driver=hyperv
E0428 17:40:36.424782    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:41:44.196273    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-402800 --driver=hyperv: (3m11.5964534s)
--- PASS: TestImageBuild/serial/Setup (191.60s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.3s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-402800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-402800: (9.3040969s)
--- PASS: TestImageBuild/serial/NormalBuild (9.30s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.55s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-402800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-402800: (8.5451931s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.55s)
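
--build-opt forwards options to the underlying docker build, so build-arg=ENV_A=test_env_str only has an effect if the build context declares that ARG. A Go sketch that stages a hypothetical Dockerfile consuming it and runs the same invocation (the Dockerfile body is an assumption; the suite's testdata/image-build/test-arg is not reproduced here):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		dir, err := os.MkdirTemp("", "image-build")
		if err != nil {
			log.Fatal(err)
		}
		defer os.RemoveAll(dir)
		// Hypothetical context that consumes the forwarded build arg.
		dockerfile := "FROM busybox\nARG ENV_A\nRUN echo ${ENV_A} > /env_a.txt\n"
		if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
			log.Fatal(err)
		}
		out, err := exec.Command("minikube", "-p", "image-402800", "image", "build",
			"-t", "aaa:latest",
			"--build-opt=build-arg=ENV_A=test_env_str",
			"--build-opt=no-cache",
			dir).CombinedOutput()
		if err != nil {
			log.Fatalf("%v\n%s", err, out)
		}
		os.Stdout.Write(out)
	}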

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-402800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-402800: (7.4652571s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.47s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.35s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-402800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-402800: (7.3491524s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.35s)

                                                
                                    
TestJSONOutput/start/Command (239.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-611800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0428 17:43:39.610290    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 17:43:41.003837    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:45:36.427605    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-611800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m59.5517323s)
--- PASS: TestJSONOutput/start/Command (239.55s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-611800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-611800 --output=json --user=testUser: (7.5600056s)
--- PASS: TestJSONOutput/pause/Command (7.56s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-611800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-611800 --output=json --user=testUser: (7.5296062s)
--- PASS: TestJSONOutput/unpause/Command (7.53s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (33.63s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-611800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-611800 --output=json --user=testUser: (33.6265865s)
--- PASS: TestJSONOutput/stop/Command (33.63s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.42s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-664600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-664600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (242.6372ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b6e1a29d-cdf6-44d0-892a-e19c31f4d7a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664600] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d85c45cb-3edd-416f-8e1a-3970fe49ee76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"a3e919c3-3ab9-4e6d-b7f3-c3ca221a7c47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"737fb454-2256-41b0-b61c-e6d8e85b36c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"c97c27c0-eddf-4c7c-8814-29dcd1c038be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17977"}}
	{"specversion":"1.0","id":"eb04f13c-09fc-4055-850c-bd8539c95166","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5384ba5-e719-4042-9267-a23fab4a17e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0428 17:48:22.707881   10688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-664600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-664600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-664600: (1.1753855s)
--- PASS: TestErrorJSONOutput (1.42s)
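
Every stdout line above is a CloudEvents envelope, and failures arrive as an io.k8s.sigs.minikube.error event with the exit code inside data. A Go sketch that filters such a stream for the error event (field names taken from the output above):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// event is the subset of the CloudEvents envelope this sketch needs.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe in the lines emitted by "minikube start --output=json".
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue // skip interleaved non-JSON noise
			}
			var ev event
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit code %s: %s (%s)\n", ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
			}
		}
	}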

                                                
                                    
TestMainNoArgs (0.21s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.21s)

                                                
                                    
TestMinikubeProfile (505.74s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-587500 --driver=hyperv
E0428 17:48:41.016088    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:50:36.428751    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-587500 --driver=hyperv: (3m10.3515033s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-587500 --driver=hyperv
E0428 17:53:41.015994    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-587500 --driver=hyperv: (3m13.6072967s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-587500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (18.3524269s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-587500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (18.4091665s)
helpers_test.go:175: Cleaning up "second-587500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-587500
E0428 17:55:36.436335    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-587500: (44.7387066s)
helpers_test.go:175: Cleaning up "first-587500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-587500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-587500: (39.5215537s)
--- PASS: TestMinikubeProfile (505.74s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (151.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-995600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0428 17:58:24.209331    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 17:58:41.006002    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-995600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m30.4573192s)
--- PASS: TestMountStart/serial/StartWithMountFirst (151.46s)
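
The --mount* flags pin the mount's ownership (uid/gid 0), its 9p msize, and the host port it is served on, and the share appears inside the guest at /minikube-host; the Verify* subtests below simply list that directory. The same start-then-list sequence, sketched in Go with the flag values from this run:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "mount-start-1-995600"
		start := exec.Command("minikube", "start", "-p", profile,
			"--memory=2048", "--mount", "--mount-gid", "0", "--mount-msize", "6543",
			"--mount-port", "46464", "--mount-uid", "0", "--no-kubernetes", "--driver=hyperv")
		if out, err := start.CombinedOutput(); err != nil {
			log.Fatalf("start: %v\n%s", err, out)
		}
		out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput()
		if err != nil {
			log.Fatalf("ls: %v\n%s", err, out)
		}
		os.Stdout.Write(out)
	}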

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.12s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-995600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-995600 ssh -- ls /minikube-host: (9.122897s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.12s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (149.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-995600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0428 18:00:19.623366    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 18:00:36.435138    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-995600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m28.040714s)
--- PASS: TestMountStart/serial/StartWithMountSecond (149.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.11s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-995600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-995600 ssh -- ls /minikube-host: (9.1060916s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.11s)

                                                
                                    
TestMountStart/serial/DeleteFirst (26.55s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-995600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-995600 --alsologtostderr -v=5: (26.5472759s)
--- PASS: TestMountStart/serial/DeleteFirst (26.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (8.9s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-995600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-995600 ssh -- ls /minikube-host: (8.8976941s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.90s)

                                                
                                    
TestMountStart/serial/Stop (28.84s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-995600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-995600: (28.841683s)
--- PASS: TestMountStart/serial/Stop (28.84s)

                                                
                                    
TestMountStart/serial/RestartStopped (115.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-995600
E0428 18:03:41.007434    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-995600: (1m54.0147243s)
--- PASS: TestMountStart/serial/RestartStopped (115.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.02s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-995600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-995600 ssh -- ls /minikube-host: (9.0163626s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.02s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (409.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-788600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0428 18:08:41.013362    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:10:36.433568    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-788600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m26.9548761s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr: (23.0316157s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (409.99s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- rollout status deployment/busybox: (3.0024588s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- nslookup kubernetes.io: (1.9914669s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4qvlm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4qvlm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4fdn6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-788600 -- exec busybox-fc5497c4f-4qvlm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.60s)
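
The busybox deployment places one pod per node, and the assertions are plain nslookup runs from inside every pod, covering a public name plus the cluster-internal service names. The same matrix, sketched in Go (pod names are the ones from this run):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-fc5497c4f-4fdn6", "busybox-fc5497c4f-4qvlm"}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, name := range names {
				out, err := exec.Command("kubectl", "--context", "multinode-788600",
					"exec", pod, "--", "nslookup", name).CombinedOutput()
				if err != nil {
					log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, out)
				}
				fmt.Printf("%s resolved %s\n", pod, name)
			}
		}
	}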

                                                
                                    
TestMultiNode/serial/AddNode (219.51s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-788600 -v 3 --alsologtostderr
E0428 18:15:04.219326    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:15:36.427011    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-788600 -v 3 --alsologtostderr: (3m5.3100016s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr
E0428 18:16:59.626612    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr: (34.2015481s)
--- PASS: TestMultiNode/serial/AddNode (219.51s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-788600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (9.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.4466057s)
--- PASS: TestMultiNode/serial/ProfileList (9.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (348.34s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 status --output json --alsologtostderr: (34.5930217s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600:/home/docker/cp-test.txt: (9.2180721s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt": (9.0893849s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600.txt: (9.0453077s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt"
E0428 18:18:41.015934    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt": (9.0223205s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt multinode-788600-m02:/home/docker/cp-test_multinode-788600_multinode-788600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt multinode-788600-m02:/home/docker/cp-test_multinode-788600_multinode-788600-m02.txt: (15.8263968s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt": (9.114575s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test_multinode-788600_multinode-788600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test_multinode-788600_multinode-788600-m02.txt": (9.1590794s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt multinode-788600-m03:/home/docker/cp-test_multinode-788600_multinode-788600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt multinode-788600-m03:/home/docker/cp-test_multinode-788600_multinode-788600-m03.txt: (16.0219572s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt": (9.0375622s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test_multinode-788600_multinode-788600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test_multinode-788600_multinode-788600-m03.txt": (9.078444s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600-m02:/home/docker/cp-test.txt: (9.0837236s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt": (9.2342247s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m02.txt: (9.2457198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt": (9.3233381s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt multinode-788600:/home/docker/cp-test_multinode-788600-m02_multinode-788600.txt
E0428 18:20:36.435287    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt multinode-788600:/home/docker/cp-test_multinode-788600-m02_multinode-788600.txt: (16.1227146s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt": (9.182012s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test_multinode-788600-m02_multinode-788600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test_multinode-788600-m02_multinode-788600.txt": (8.947637s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt multinode-788600-m03:/home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m02:/home/docker/cp-test.txt multinode-788600-m03:/home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt: (15.7911291s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test.txt": (9.1346599s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test_multinode-788600-m02_multinode-788600-m03.txt": (9.2511255s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600-m03:/home/docker/cp-test.txt: (9.0346456s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt": (9.124352s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile2232407997\001\cp-test_multinode-788600-m03.txt: (8.9602186s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt": (9.0033037s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt multinode-788600:/home/docker/cp-test_multinode-788600-m03_multinode-788600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt multinode-788600:/home/docker/cp-test_multinode-788600-m03_multinode-788600.txt: (15.9176683s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt": (9.0687781s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test_multinode-788600-m03_multinode-788600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test_multinode-788600-m03_multinode-788600.txt": (9.0704858s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt multinode-788600-m02:/home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600-m03:/home/docker/cp-test.txt multinode-788600-m02:/home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt: (15.6464292s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m03 "sudo cat /home/docker/cp-test.txt": (8.9737903s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600-m02 "sudo cat /home/docker/cp-test_multinode-788600-m03_multinode-788600-m02.txt": (8.9900896s)
--- PASS: TestMultiNode/serial/CopyFile (348.34s)
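
The copy matrix above repeats one pattern per node pair: push a local file in, read it back over SSH, then copy it node-to-node. A minimal sketch of one iteration, with names taken from this run:

# Host -> node, verify over SSH, then node -> node (control plane to worker m02).
out/minikube-windows-amd64.exe -p multinode-788600 cp testdata\cp-test.txt multinode-788600:/home/docker/cp-test.txt
out/minikube-windows-amd64.exe -p multinode-788600 ssh -n multinode-788600 "sudo cat /home/docker/cp-test.txt"
out/minikube-windows-amd64.exe -p multinode-788600 cp multinode-788600:/home/docker/cp-test.txt multinode-788600-m02:/home/docker/cp-test_multinode-788600_multinode-788600-m02.txt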

TestMultiNode/serial/StopNode (73.61s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 node stop m03
E0428 18:23:41.012512    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 node stop m03: (23.7853328s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-788600 status: exit status 7 (24.8508676s)
-- stdout --
	multinode-788600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-788600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-788600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0428 18:23:42.612339   14368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-788600 status --alsologtostderr: exit status 7 (24.9677143s)
-- stdout --
	multinode-788600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-788600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-788600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0428 18:24:07.477246    4604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 18:24:07.484842    4604 out.go:291] Setting OutFile to fd 1120 ...
	I0428 18:24:07.485780    4604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:24:07.485780    4604 out.go:304] Setting ErrFile to fd 1128...
	I0428 18:24:07.485780    4604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 18:24:07.503292    4604 out.go:298] Setting JSON to false
	I0428 18:24:07.503500    4604 mustload.go:65] Loading cluster: multinode-788600
	I0428 18:24:07.503645    4604 notify.go:220] Checking for updates...
	I0428 18:24:07.504282    4604 config.go:182] Loaded profile config "multinode-788600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 18:24:07.504395    4604 status.go:255] checking status of multinode-788600 ...
	I0428 18:24:07.505491    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:24:09.605679    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:24:09.605679    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:09.605679    4604 status.go:330] multinode-788600 host status = "Running" (err=<nil>)
	I0428 18:24:09.605835    4604 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:24:09.606617    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:24:11.693349    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:24:11.693349    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:11.693349    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:24:14.115242    4604 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:24:14.115689    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:14.115784    4604 host.go:66] Checking if "multinode-788600" exists ...
	I0428 18:24:14.129779    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 18:24:14.129779    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600 ).state
	I0428 18:24:16.140969    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:24:16.140969    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:16.141055    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]
	I0428 18:24:18.651040    4604 main.go:141] libmachine: [stdout =====>] : 172.27.231.169
	
	I0428 18:24:18.651348    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:18.651496    4604 sshutil.go:53] new ssh client: &{IP:172.27.231.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600\id_rsa Username:docker}
	I0428 18:24:18.755874    4604 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.626085s)
	I0428 18:24:18.769566    4604 ssh_runner.go:195] Run: systemctl --version
	I0428 18:24:18.791667    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:24:18.816792    4604 kubeconfig.go:125] found "multinode-788600" server: "https://172.27.231.169:8443"
	I0428 18:24:18.816792    4604 api_server.go:166] Checking apiserver status ...
	I0428 18:24:18.832595    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 18:24:18.875433    4604 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2072/cgroup
	W0428 18:24:18.900031    4604 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0428 18:24:18.912050    4604 ssh_runner.go:195] Run: ls
	I0428 18:24:18.919688    4604 api_server.go:253] Checking apiserver healthz at https://172.27.231.169:8443/healthz ...
	I0428 18:24:18.926307    4604 api_server.go:279] https://172.27.231.169:8443/healthz returned 200:
	ok
	I0428 18:24:18.926307    4604 status.go:422] multinode-788600 apiserver status = Running (err=<nil>)
	I0428 18:24:18.926307    4604 status.go:257] multinode-788600 status: &{Name:multinode-788600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0428 18:24:18.926911    4604 status.go:255] checking status of multinode-788600-m02 ...
	I0428 18:24:18.927040    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:24:20.981358    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:24:20.981358    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:20.981450    4604 status.go:330] multinode-788600-m02 host status = "Running" (err=<nil>)
	I0428 18:24:20.981450    4604 host.go:66] Checking if "multinode-788600-m02" exists ...
	I0428 18:24:20.982219    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:24:23.116405    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:24:23.116617    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:23.116617    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:24:25.629237    4604 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:24:25.629237    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:25.629430    4604 host.go:66] Checking if "multinode-788600-m02" exists ...
	I0428 18:24:25.644162    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0428 18:24:25.644162    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m02 ).state
	I0428 18:24:27.673498    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0428 18:24:27.673498    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:27.674153    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-788600-m02 ).networkadapters[0]).ipaddresses[0]
	I0428 18:24:30.116005    4604 main.go:141] libmachine: [stdout =====>] : 172.27.230.221
	
	I0428 18:24:30.116062    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:30.116062    4604 sshutil.go:53] new ssh client: &{IP:172.27.230.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-788600-m02\id_rsa Username:docker}
	I0428 18:24:30.221510    4604 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5773379s)
	I0428 18:24:30.234854    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 18:24:30.262811    4604 status.go:257] multinode-788600-m02 status: &{Name:multinode-788600-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0428 18:24:30.262811    4604 status.go:255] checking status of multinode-788600-m03 ...
	I0428 18:24:30.263419    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-788600-m03 ).state
	I0428 18:24:32.288502    4604 main.go:141] libmachine: [stdout =====>] : Off
	
	I0428 18:24:32.288502    4604 main.go:141] libmachine: [stderr =====>] : 
	I0428 18:24:32.288502    4604 status.go:330] multinode-788600-m03 host status = "Stopped" (err=<nil>)
	I0428 18:24:32.289017    4604 status.go:343] host is not running, skipping remaining checks
	I0428 18:24:32.289017    4604 status.go:257] multinode-788600-m03 status: &{Name:multinode-788600-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (73.61s)
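
The stderr trace above shows how `minikube status` probes each machine on the Hyper-V driver: it shells out to PowerShell for the VM state and IP address, then runs the kubelet and apiserver checks over SSH. The two PowerShell probes can be reproduced directly (VM name from this run; assumes an elevated session with the Hyper-V module available):

# Power state of the VM, as queried by libmachine.
( Hyper-V\Get-VM multinode-788600 ).state
# First IP of the first network adapter, used to build the SSH connection.
(( Hyper-V\Get-VM multinode-788600 ).networkadapters[0]).ipaddresses[0]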

TestMultiNode/serial/StartAfterStop (178.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 node start m03 -v=7 --alsologtostderr
E0428 18:25:36.435787    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 node start m03 -v=7 --alsologtostderr: (2m24.3632151s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-788600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-788600 status -v=7 --alsologtostderr: (33.8569122s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (178.40s)
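
A minimal sketch of the restart sequence just exercised, using this run's profile and node name:

# Bring the stopped worker back, then verify cluster-wide status.
out/minikube-windows-amd64.exe -p multinode-788600 node start m03 -v=7 --alsologtostderr
out/minikube-windows-amd64.exe -p multinode-788600 status -v=7 --alsologtostderr
kubectl get nodes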

TestPreload (562.92s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-439400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0428 18:38:41.019747    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:40:36.434214    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-439400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m31.5301867s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-439400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-439400 image pull gcr.io/k8s-minikube/busybox: (8.2417374s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-439400
E0428 18:43:41.015674    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-439400: (39.0855773s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-439400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0428 18:45:36.440166    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-439400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (3m15.15712s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-439400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-439400 image list: (7.1338639s)
helpers_test.go:175: Cleaning up "test-preload-439400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-439400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-439400: (41.7697433s)
--- PASS: TestPreload (562.92s)
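
The preload check above is a four-step round trip; a sketch with this run's profile name and flags:

# 1) Start an older Kubernetes with preloaded tarballs disabled.
out/minikube-windows-amd64.exe start -p test-preload-439400 --memory=2200 --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
# 2) Pull an image that is not part of any preload.
out/minikube-windows-amd64.exe -p test-preload-439400 image pull gcr.io/k8s-minikube/busybox
# 3) Stop, then restart on the default Kubernetes version (preload now applies).
out/minikube-windows-amd64.exe stop -p test-preload-439400
out/minikube-windows-amd64.exe start -p test-preload-439400 --memory=2200 --wait=true --driver=hyperv
# 4) The earlier pull must still appear in the image list.
out/minikube-windows-amd64.exe -p test-preload-439400 image list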

TestScheduledStopWindows (321.67s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-817200 --memory=2048 --driver=hyperv
E0428 18:48:24.249118    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:48:41.022406    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-285400\client.crt: The system cannot find the path specified.
E0428 18:50:19.644206    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
E0428 18:50:36.445280    3228 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-610300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-817200 --memory=2048 --driver=hyperv: (3m11.2841771s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-817200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-817200 --schedule 5m: (10.2316344s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-817200 -n scheduled-stop-817200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-817200 -n scheduled-stop-817200: exit status 1 (10.0247482s)
** stderr ** 
	W0428 18:51:15.107257   12796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-817200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-817200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.1268519s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-817200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-817200 --schedule 5s: (10.0910155s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-817200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-817200: exit status 7 (2.2274595s)
-- stdout --
	scheduled-stop-817200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0428 18:52:44.359682    9676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-817200 -n scheduled-stop-817200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-817200 -n scheduled-stop-817200: exit status 7 (2.1864087s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0428 18:52:46.578760    5288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-817200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-817200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-817200: (26.4924932s)
--- PASS: TestScheduledStopWindows (321.67s)
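
The scheduled-stop flow condenses to the following; a sketch with this run's profile name. As the log shows, the schedule is carried by a minikube-scheduled-stop systemd unit inside the guest:

# Schedule a stop 5 minutes out, then inspect the unit that will perform it.
out/minikube-windows-amd64.exe stop -p scheduled-stop-817200 --schedule 5m
out/minikube-windows-amd64.exe ssh -p scheduled-stop-817200 -- sudo systemctl show minikube-scheduled-stop --no-page
# Reschedule for 5 seconds, then confirm the host reaches Stopped.
out/minikube-windows-amd64.exe stop -p scheduled-stop-817200 --schedule 5s
out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-817200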

Test skip (29/193)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (7.74s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-285400 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-285400 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 6264: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (7.74s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-285400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-285400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0295521s)
-- stdout --
	* [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	W0428 16:47:48.543681   10508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 16:47:48.546696   10508 out.go:291] Setting OutFile to fd 932 ...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.547686   10508 out.go:304] Setting ErrFile to fd 996...
	I0428 16:47:48.547686   10508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:48.580285   10508 out.go:298] Setting JSON to false
	I0428 16:47:48.586291   10508 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5511,"bootTime":1714342556,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:48.586291   10508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:48.591295   10508 out.go:177] * [functional-285400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:48.595382   10508 notify.go:220] Checking for updates...
	I0428 16:47:48.597996   10508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:48.600555   10508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:48.603556   10508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:48.605554   10508 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:48.607555   10508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 16:47:48.611548   10508 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:47:48.612548   10508 driver.go:392] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-285400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-285400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0375359s)
-- stdout --
	* [functional-285400] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	W0428 16:47:43.505392   14936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0428 16:47:43.508321   14936 out.go:291] Setting OutFile to fd 716 ...
	I0428 16:47:43.509321   14936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:43.509321   14936 out.go:304] Setting ErrFile to fd 304...
	I0428 16:47:43.509321   14936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 16:47:43.536317   14936 out.go:298] Setting JSON to false
	I0428 16:47:43.544979   14936 start.go:129] hostinfo: {"hostname":"minikube1","uptime":5506,"bootTime":1714342556,"procs":209,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0428 16:47:43.545317   14936 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0428 16:47:43.551320   14936 out.go:177] * [functional-285400] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0428 16:47:43.555320   14936 notify.go:220] Checking for updates...
	I0428 16:47:43.557319   14936 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0428 16:47:43.560309   14936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 16:47:43.562314   14936 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0428 16:47:43.565313   14936 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 16:47:43.567318   14936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 16:47:43.571316   14936 config.go:182] Loaded profile config "functional-285400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0428 16:47:43.572319   14936 driver.go:392] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)